System and method for sharing SAN storage

Information

  • Patent Grant
  • Patent Number
    11,228,647
  • Date Filed
    Thursday, February 13, 2020
  • Date Issued
    Tuesday, January 18, 2022
Abstract
According to various embodiments, systems and methods are provided that relate to shared access to Storage Area Network (SAN) devices. In one embodiment, a Storage Area Network (SAN) host is provided, comprising: a server component; a first host bus adapter configured to be connected to a SAN client over a first SAN; a second host bus adapter configured to be connected to a SAN storage device over a second SAN; and wherein the server component is configured to manage a data block on the SAN storage device, receive a storage operation request from the SAN client through the first host bus adapter, and in response to the storage operation request, perform a storage operation on the data block, the storage operation being performed over the second SAN through the second host bus adapter.
Description
RELATED APPLICATIONS

Any and all applications for which a foreign or domestic priority claim is identified in the Application Data Sheet, or any correction thereto, are hereby incorporated by reference into this application under 37 CFR 1.57.


BACKGROUND OF THE INVENTION
Field of the Invention

The present invention generally relates to data storage, and more particularly, some embodiments relate to Storage Area Network (SAN) systems and methods.


Description of the Related Art

The storage and retrieval of data is an age-old art that has evolved as methods for processing and using data have evolved. In the early 18th century, Basile Bouchon is purported to have used a perforated paper loop to store patterns used for printing cloth. In the mechanical arts, similar technology in the form of punch cards and punch tape was used in the 19th century in textile mills to control mechanized looms. Two centuries later, early computers also used punch cards and paper punch tape to store data and to input programs.


However, punch cards were not the only storage mechanism available in the mid 20th century. Drum memory was widely used in the 1950s and 1960s with capacities approaching about 10 KB, and the first hard drive was developed in the 1950s and is reported to have used 50 24-inch discs to achieve a total capacity of almost 5 MB. These were large and costly systems, and although punch cards were inconvenient, their lower cost contributed to their longevity as a viable alternative.


In 1980, the hard drive broke the 1 GB capacity mark with the introduction of the IBM 3380, which could store more than two gigabytes of data. The IBM 3380, however, was about as large as a refrigerator, weighed ¼ ton, and cost between approximately $97,000 and $142,000, depending on features selected. In contrast, contemporary storage systems now provide storage for hundreds of terabytes of data, or more, for seemingly instantaneous access by networked devices. Even handheld electronic devices such as digital cameras, MP3 players and others are capable of storing gigabytes of data, and modern desktop computers boast gigabytes or terabytes of storage capacity.


With the advent of networked computing, storage of electronic data has also expanded from the individual computer to network-accessible storage devices. These include, for example, optical libraries, Redundant Arrays of Inexpensive Disks (RAID), CD-ROM jukeboxes, drive pools and other mass storage technologies. These storage devices are accessible to and can be shared by individual computers using such traditional networks as Local Area Networks (LANs) and Wide Area Networks (WANs), or using Storage Area Networks (SANs). These client computers not only access their own local storage devices but also network storage devices to perform backups, transaction processing, file sharing, and other storage-related operations.


Network bandwidth is limited and can be overloaded by volumes of data stored and shared by networked devices. During operations such as system backups, transaction processing, file copying and transfer, and other similar operations, the network communication bandwidth often becomes the rate-limiting factor.


SANs, in particular, are networks designed to facilitate transport of data to and from network storage devices, while addressing the bandwidth issues caused by large volumes of data stored and shared on the network storage devices. Specifically, SANs are network architectures that comprise a network of storage devices that are generally not accessible by nodes on a traditional network (e.g., LAN or WAN). As such, a SAN implementation usually requires two networks. The first network is a traditional network, such as a LAN, designed to transport ordinary traffic between individual network computers (i.e., nodes). The second network is the SAN itself, which is accessible by individual computers through the SAN but not through the traditional network. Typically, once a SAN storage device (also referred to as a SAN storage node) is remotely attached to an individual computer over a SAN, it appears and functions much like a locally attached storage device (as opposed to appearing and functioning as a network drive).


By utilizing a SAN as a separate network for storage devices that perform bandwidth-intensive operations (e.g., backups, transaction processing, and the like), the SAN storage devices realize improved bandwidth among themselves and with traditional computers attached to the SAN. Additionally, when storage devices and traditional nodes communicate over the SAN, more bandwidth-intensive operations are performed over the SAN rather than a LAN, leaving the LAN to handle only the ordinary data traffic.



FIG. 1 illustrates an example of a traditional SAN implementation 10. There are multiple client nodes 15, 18, and 21 networked together using a LAN 19, which allows communication of ordinary data traffic between the nodes (15, 18, 21). Storage devices 12 are connected together through SAN 13, which provides high bandwidth network capacity for bandwidth-intensive data operations to and from the storage devices 12. As illustrated, client nodes 18 and 21 are also connected to SAN 13, allowing them high bandwidth data access to the storage devices 12. As discussed above, by utilizing the SAN to perform high bandwidth data access, the client nodes are not only moving bandwidth-intensive data operations from the LAN 19 to the SAN 13, but also accessing the data at higher data rates than are typically available on a traditional network such as a LAN. Typically, SANs utilize high bandwidth network technologies, such as Fiber Channel (FC), InfiniBand, Internet Small Computer System Interface (iSCSI), HyperSCSI, and Serial Attached SCSI (SAS), which are not commonly utilized in traditional networks such as LANs.


SUMMARY OF THE INVENTION

Various embodiments of the invention relate to shared access to Storage Area Network (SAN) storage devices, such as disk arrays, tape libraries, or optical jukeboxes. Embodiments of the present invention allow for managed, shared access to SAN storage devices while ensuring that data to and from SAN storage devices traverses over the SAN, and not traditional networks such as LANs or WANs. Embodiments of the present invention can also provide traditional client nodes with dynamic/on-demand provisioning of storage on SAN storage devices, and with concurrent read/write access to SAN storage devices.


In one embodiment, a Storage Area Network (SAN) host is provided, comprising: a server component; a first host bus adapter configured to be connected to a SAN client over a first SAN; a second host bus adapter configured to be connected to a SAN storage device over a second SAN; and wherein the server component is configured to manage a data block on the SAN storage device, provide the SAN client with block access to the data block on the SAN storage device, receive a storage operation request from the SAN client through the first host bus adapter, and in response to the storage operation request, perform a storage operation on the data block, the storage operation being performed over the second SAN through the second host bus adapter. By providing the block access to the SAN storage device, the SAN host can appear to the SAN client as a locally attached storage device. The SAN client may then access data made available from the SAN storage device by the SAN host as if the SAN client has direct disk access to the SAN storage device.


For some embodiments, the first host bus adapter is configured to provide the SAN client with block access to data on the SAN storage device through the first host bus adapter, while the second host bus adapter is configured to provide the SAN host with block access to data on the SAN storage device through the second host bus adapter.


In some embodiments, the first host bus adapter connects to the SAN client over the first SAN using Fiber Channel (FC), InfiniBand, or Serial Attached SCSI (SAS). In additional embodiments, the second host bus adapter connects to the SAN storage device over the second SAN using Fiber Channel (FC), InfiniBand, or Serial Attached SCSI (SAS). Additionally, in some embodiments, the first host bus adapter may be in target mode, so that the first host bus adapter may operate as a server of data, while the second host bus adapter may be in initiator mode, so that the second host bus adapter may operate as a client of data.


In further embodiments, the server component is configured to manage shared access to one or more SAN storage devices. This shared access management may be performed by way of a data repository that the server component utilizes to track and maintain one or more SAN storage devices within a pool of storage resources. As this tracking and maintenance may involve managing blocks of data within the pool of storage resources, in some embodiments the data repository is utilized to manage the blocks of data on one or more SAN storage devices. Additionally, the server component may use the data repository to provision and determine allocation of storage space within the pool to a client node. Further, the server component may be configured to perform dynamic/on-demand provisioning of storage space on the SAN storage device for the SAN client as requested.


In some embodiments, the SAN storage device comprises a plurality of SAN storage devices managed by the server component as a pool of storage resources. In some such embodiments, the server component is further configured to add a new SAN storage device to the pool when the new SAN storage device is added to the second SAN.


In further embodiments, the server component may be further configured to perform data de-duplication on the SAN storage device while performing a storage operation.


By managing shared access, the server component is able to operate as an arbitrator of the storage operation requests it receives. As such, in some embodiments, the server component determines whether, and when, the storage operation request is performed as the storage operation. This determination, for example, may be performed based on one or more settings and parameters stored on the SAN host system. Depending on the embodiment, the settings and parameters may be stored in and retrieved from a data repository accessible to the server component.


Additionally, the server component may be configured to provide a plurality of SAN clients with concurrent access to the SAN storage device. As such, the server component may be configured to arbitrate between a plurality of storage operation requests when they are received, either from a single SAN client or from multiple SAN clients.


Depending on the embodiment, the first storage operation or the second storage operation may be a file read, a file write, a file create, or a file delete operation; the first storage operation may also be a discovery request to the system.


In additional embodiments, a Storage Area Network (SAN) client is provided, comprising: a client component; a host bus adapter configured to be connected to a SAN host over a first SAN; wherein the client component is configured to receive from the SAN client a request to perform a first storage operation on a SAN storage device, translate the first storage operation to a SAN host storage operation request, and send the SAN host storage operation request to the SAN host over the first SAN, the SAN host being configured to receive the storage operation request, and in response to the storage operation request, perform a second storage operation on the SAN storage device over a second SAN. The host bus adapter of the SAN client may be set to initiator mode. In some such embodiments, the client component receives the request to perform the first storage operation through an application program interface (API) function call.


In some embodiments, a method for a Storage Area Network (SAN) host is provided, comprising: receiving from a SAN client a request to perform a first storage operation on a SAN storage device, wherein the request is received over a first SAN through a first host bus adapter; and in response to the request, performing a second storage operation on the SAN storage device, wherein the second storage operation is performed over a second SAN through a second host bus adapter. In some such embodiments, the method further comprises: arbitrating between a plurality of storage operation requests. In additional such embodiments, the method further comprises: detecting a new SAN device on the second SAN; and adding the new SAN storage device to a pool of storage resources. In other such embodiments, the method further comprises: receiving a discovery request from the SAN client; and transmitting a discovery response to the SAN client, wherein the discovery response represents the SAN host as a traditional SAN storage device.


In yet further embodiments, a Storage Area Network (SAN) system is provided, the system comprising a SAN host in accordance with an embodiment of the present invention, and a SAN client in accordance with an embodiment of the present invention. Other features and aspects of the invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, which illustrate, by way of example, the features in accordance with embodiments of the invention. The summary is not intended to limit the scope of the invention, which is defined solely by the claims attached hereto.





BRIEF DESCRIPTION OF THE DRAWINGS

The present invention, in accordance with one or more various embodiments, is described in detail with reference to the following Figures. The drawings are provided for purposes of illustration only and merely depict typical or example embodiments of the invention. These drawings are provided to facilitate the reader's understanding of the invention and shall not be considered limiting of the breadth, scope, or applicability of the invention. It should be noted that for clarity and ease of illustration these drawings are not necessarily made to scale.



FIG. 1 is a diagram illustrating an example traditional Storage Area Network (SAN).



FIG. 2 is a diagram illustrating an example of data storage algorithms and architectures that can be used in conjunction with the systems and methods in accordance with embodiments of the present invention.



FIG. 3 is a diagram illustrating an example SAN in accordance with one embodiment of the present invention.



FIG. 4 is a diagram illustrating an example sequence of interactions between entities of a SAN in accordance with one embodiment of the present invention.



FIGS. 5A and 5B are flowcharts of example methods in accordance with embodiments of the present invention.



FIG. 6 is a diagram illustrating an example computing system with which aspects of the systems and methods described herein can be implemented in accordance with one embodiment of the present invention.





The Figures are not intended to be exhaustive or to limit the invention to the precise form disclosed. It should be understood that the invention can be practiced with modification and alteration, and that the invention be limited only by the claims and the equivalents thereof.


DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

The present invention relates to Storage Area Networks (SANs) and, more particularly, shared access to Storage Area Network (SAN) storage devices. Particular embodiments of the present invention allow for managed, shared access to SAN storage devices. Some such embodiments ensure that data to and from SAN storage devices traverses over the SAN, and remains off traditional networks such as LANs or WANs. Further embodiments can provide traditional client nodes with dynamic/on-demand provisioning of storage on SAN storage devices, while providing concurrent read/write access to SAN storage devices.


Depending on the embodiment, the shared and concurrent access may comprise simultaneous access to the same block of data, the same file, the same SAN storage device, or the same allocation of storage space located on a SAN storage device or within a pool of SAN storage devices (i.e., storage resources). In some embodiments, the shared and concurrent access may be implemented by way of a queue that is maintained and controlled by the server component. In further embodiments, the concurrent access may be implemented by way of a priority framework, whereby a first SAN client having higher priority access than a second SAN client may preempt data access from that second SAN client, or preempt the second SAN client's position in a queue. A minimal sketch of such an arbitration scheme appears below.
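
The following Python sketch illustrates the idea; the patent does not prescribe an implementation, and the PriorityArbitrator class, its method names, and the request format are assumptions introduced here for illustration only.

    import heapq
    import itertools

    class PriorityArbitrator:
        """Hypothetical sketch of the priority framework: requests from
        higher-priority SAN clients are dispatched before requests from
        lower-priority clients, effectively preempting their queue position."""

        def __init__(self):
            self._queue = []               # heap of (priority, seq, request)
            self._seq = itertools.count()  # tie-breaker preserves FIFO order

        def submit(self, client_priority, request):
            # Lower number = higher priority; a priority-1 request jumps
            # ahead of priority-2 requests already waiting in the queue.
            heapq.heappush(self._queue, (client_priority, next(self._seq), request))

        def next_request(self):
            # Dispatch the highest-priority pending request first.
            if self._queue:
                return heapq.heappop(self._queue)[2]
            return None

    # Example: a higher-priority client submitted later is still served first.
    arb = PriorityArbitrator()
    arb.submit(2, "client-B: file write")
    arb.submit(1, "client-A: file read")
    assert arb.next_request() == "client-A: file read"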


Before describing the invention in detail, it is useful to describe a few example environments with which the invention can be implemented. The systems and methods described herein can be implemented using a number of different storage architectures. One such exemplary storage architecture is described with reference to FIG. 2.


Turning now to FIG. 2, the example storage operation cell 50 may perform storage operations on electronic data, such as that in a computer network. As shown in this example, storage operation cell 50 may generally include a storage manager 100, a data agent 95, a media agent 105, and a storage device 115. The storage operation cell 50 may also include components such as a client 85, a data or information store 90, databases 110 and 111, a jobs agent 120, an interface module 125, and a management agent 130. Each media agent 105 may control one or more Input/Output (I/O) devices such as a Host Bus Adaptor (HBA) or other communications link for transferring data from client 85 to storage devices 115. Such a system and elements thereof are exemplary of a modular backup system such as the CommVault® QiNetix system, and also the CommVault GALAXY® backup system, available from CommVault Systems, Inc. of Oceanport, N.J., and further described in U.S. Pat. Nos. 7,035,880 and 7,620,710, each of which is incorporated herein by reference in its entirety.


A storage operation cell, such as cell 50, may generally include combinations of hardware and software components associated with performing storage operations on electronic data. Exemplary storage operation cells according to embodiments of the invention may include CommCells as embodied in the QNet storage management system and the QiNetix storage management system by CommVault Systems of Oceanport, N.J. According to some embodiments of the invention, storage operation cell 50 may be related to backup cells and provide some or all of the functionality of backup cells as described in U.S. Pat. No. 7,395,282, which is also incorporated by reference in its entirety. It should be noted, however, that in certain embodiments, storage operation cells may perform additional types of storage operations and other types of storage management functions that are not generally offered by backup cells.


Turning now to FIG. 3, a diagram is provided illustrating an example SAN system 200 implemented in accordance with certain embodiments of the present invention. The illustrated system 200 includes a first SAN 203, a second SAN 209, and a SAN host 206. The first SAN 203, using SAN connections 212, connects SAN clients 218 together and connects the SAN clients 218 to the SAN host 206. A SAN client is any computing device that accesses a SAN storage device. In the illustrated embodiment, the SAN clients 218 are desktop computers. Depending on the embodiment, a SAN client 218 may be operating a media agent that controls one or more input/output (I/O) devices, such as a host bus adapter (HBA) or other communication link, for transferring data over SAN connections 212 on the first SAN 203. For example, the SAN client 218 may be connected to the first SAN 203 using a Fiber Channel HBA and a Fiber Channel connection. Using SAN connections 212, a SAN client 218 on the first SAN 203 may, for example, transfer data to and from a SAN storage device on the first SAN 203 or on another SAN (e.g., second SAN 209). For instance, the SAN client 218 may transfer data over the first SAN 203, and through the SAN connections 212, to the second SAN 209 via the illustrated SAN host 206.


It should be noted that references herein to data transfers should be understood to involve such storage operations as file creation, file deletion, file read, and file write. Additionally, one of ordinary skill in the art would understand and appreciate that data transfers described herein can be readily facilitated by other means of data operations in addition to just file operations (such as database operations).


Continuing with reference to FIG. 3, SAN host 206 is shown comprising a first host bus adapter (HBA) 230, which enables connections to SAN clients 218 through the first SAN 203, and a second host bus adapter (HBA) 233, which enables connections to SAN storage devices 221 via the second SAN 209. In the illustrated configuration, the first SAN 203 and the second SAN 209 are isolated from one another, thereby allowing the SAN host 206 to manage and control (e.g., arbitrate) shared access of the SAN storage devices 221 by the SAN clients 218. Both the first host bus adapter 230 and the second host bus adapter 233 could utilize different types of bus technologies to facilitate communication over their respective SANs. For example, either the first host bus adapter 230 or the second host bus adapter 233 may utilize such network technologies as Fiber Channel (FC), InfiniBand, Internet Small Computer System Interface (iSCSI), HyperSCSI, and Serial Attached SCSI (SAS). The first host bus adapter 230 or the second host bus adapter 233 may simply be a traditional network interface, which allows such technologies as Internet Small Computer System Interface (iSCSI), Fiber Channel over Ethernet (FCoE), and ATA over Ethernet (AoE) to be utilized by SAN host 206 over the SANs.


As illustrated, the second SAN 209 connects SAN storage devices 221 together using SAN connections 215, and connects those SAN storage devices 221 to the SAN host 206. Similar to SAN clients 218, the SAN storage devices 221 may control one or more input/output (I/O) devices, such as HBAs or other communication links, that allow them to connect to the second SAN 209. The SAN storage devices 221 may use, for example, a Fiber Channel HBA to connect to the second SAN 209. Using a SAN connection 215, a SAN storage device 221 may, for example, transfer data to and from a SAN client on the second SAN 209 or on another SAN (e.g., first SAN 203). For instance, the SAN storage device 221 may transfer data over the second SAN 209 to the first SAN 203 via the illustrated SAN host 206.


In some embodiments, the SAN host 206 operates as a conduit through which SAN clients 218, which may or may not be connected to a traditional network (e.g., LAN), can share access to one or more SAN storage devices 221 on a second SAN 209 over a first SAN 203. By doing so, such embodiments not only provide the SAN clients 218 shared data access to the SAN storage devices, but also provide such shared access without having to utilize a traditional network (e.g., LAN or WAN) or having the data leave a SAN. In effect, this allows the sharing of bandwidth-intensive data to remain on the SAN without burdening the SAN clients' traditional network (e.g., LAN or WAN).


The SAN host 206 may also function as a manager of storage operations, managing the blocks of data on one or more SAN storage devices. As manager, the SAN host 206 may also manage what storage operations are to be performed on SAN storage devices in response to a storage operation request from a SAN client. For example, SAN host 206 may include components that allow it to determine whether a storage operation should be performed on the SAN storage devices, and when a storage operation should be performed on the SAN storage devices. This management functionality may be utilized when, for example, two or more SAN clients are sharing access to a shared SAN storage device, and the SAN clients request concurrent access to the shared SAN storage device or pool, concurrent access to the same data on the shared SAN storage device, or concurrent access to the same allocation of storage space on the shared SAN storage device. In further examples, this concurrent access may be to a pool of SAN storage devices rather than just a single SAN storage device. As such, the SAN host 206 may allow for concurrent shared access to one or more SAN storage devices while preventing deadlocks.


The management functionality of the SAN host 206 may also allow arbitration of two or more storage operation requests that arrive at approximately the same time, deciding which storage operation should be performed first based on, for example, such parameters as the priority of the storage operation request.


Additionally, as part of data management functionality, the SAN host 206 may function to track and maintain one or more SAN storage devices as a pool of (SAN) storage resources (i.e., storage pool). In doing so, SAN host 206 may be allowed to, for example, dynamically provision (i.e., allocate) storage space from the pool for a given SAN client. For example, if the SAN host 206 were managing a pool of SAN storage resources totaling 5 TB in free space, and three SAN clients request 1 TB each of storage space, rather than statically reserving 1 TB of space within the pool to each of the SAN clients, the SAN host 206 can make a dynamic allocation of 1 TB to each of the SAN clients. In doing so, the SAN host is capable of growing a SAN client's storage space allocation as requested (i.e., on-demand).
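
The 5 TB example above can be sketched as follows. This is a hypothetical illustration of dynamic (thin) provisioning in Python, assuming a StoragePool class invented here for clarity; the patent leaves the bookkeeping unspecified.

    class StoragePool:
        """Sketch of on-demand provisioning: allocations are recorded as
        quotas, but pool space is consumed only as data is actually written,
        so three 1 TB allocations against a 5 TB pool reserve nothing up front."""

        def __init__(self, capacity_tb):
            self.capacity_tb = capacity_tb
            self.used_tb = 0.0
            self.quotas = {}   # client -> provisioned quota (TB)
            self.usage = {}    # client -> space actually written (TB)

        def provision(self, client, quota_tb):
            # Dynamic provisioning: record the allocation without reserving space.
            self.quotas[client] = quota_tb
            self.usage[client] = 0.0

        def write(self, client, size_tb):
            if self.usage[client] + size_tb > self.quotas[client]:
                raise PermissionError("client exceeded its allocation")
            if self.used_tb + size_tb > self.capacity_tb:
                raise OSError("pool capacity exhausted")
            self.usage[client] += size_tb
            self.used_tb += size_tb

    pool = StoragePool(capacity_tb=5)
    for client in ("client-1", "client-2", "client-3"):
        pool.provision(client, quota_tb=1)
    pool.write("client-1", 0.25)   # pool space is consumed only on write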


Further, by managing the SAN storage devices as a pool of SAN storage resources, the SAN host 206 can readily manage the addition of new SAN storage devices to the pool, thereby allowing the pool to grow dynamically. Specifically, the storage pool may allow the SAN host 206 to dynamically add or remove one or more SAN storage devices (e.g., 221) from the pool, thereby increasing or decreasing the overall pool size, at times without the SAN clients even being made aware of such changes. It should be noted that, for some embodiments, the SAN host 206 is capable of managing and presenting dynamically allocated storage spaces as Logical Unit Numbers (LUNs).


In some embodiments, the dynamic (e.g., on-demand) provisioning (i.e., allocation) of storage space on the pool of SAN storage resources and the tracking and maintenance of the pool may be tied into the management function. For example, if a SAN client is writing to the shared pool of SAN storage resources and the pool reaches its capacity, the arbitrator could deny performance of the SAN client's storage write request.


In some embodiments, the SAN host 206 may manage the pool of SAN storage resources by way of a data repository (i.e., data store), which assists in the tracking and maintenance of the pool (e.g., tracking free storage space, tracking occupied storage space) and the allocation of storage space to SAN clients. In the illustrated embodiment, the tracking and maintenance of the pool of SAN storage resources (and, thus, the SAN storage devices 221) by the SAN host 206 is facilitated through data repository 236. Depending on the embodiment, the data repository 236 may be implemented as a data store, such as a database. Additionally, SAN host 206 may utilize the data repository 236 to manage data blocks within the storage pool. For example, management of data blocks may entail tracking ownership of data blocks by specific SAN clients, tracking storage of file data blocks that span multiple SAN storage devices (e.g., 221), tracking assignment of data blocks to specific allocated storage space, tracking occupied storage space within the storage pool, and tracking free storage space within the storage pool. One plausible shape for such a repository is sketched below.
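
Here, the repository is modeled as a small relational store mapping file data blocks to devices and owners; the schema is an assumption for illustration, as the patent leaves the repository's implementation open.

    import sqlite3

    # Hypothetical schema for data repository 236: which SAN storage device
    # holds each data block of a file, and which SAN client owns it.
    repo = sqlite3.connect(":memory:")
    repo.execute("""
        CREATE TABLE blocks (
            file_id   TEXT,
            block_no  INTEGER,
            device_id TEXT,    -- SAN storage device holding the block
            client_id TEXT,    -- owning SAN client
            PRIMARY KEY (file_id, block_no)
        )""")

    # A single file whose data blocks span three SAN storage devices:
    repo.executemany("INSERT INTO blocks VALUES (?, ?, ?, ?)",
                     [("fileA", 0, "dev-1", "client-1"),
                      ("fileA", 1, "dev-2", "client-1"),
                      ("fileA", 2, "dev-3", "client-1")])

    # To operate on fileA, the host must touch every device listed here.
    devices = {d for (d,) in repo.execute(
        "SELECT DISTINCT device_id FROM blocks WHERE file_id = ?", ("fileA",))}
    print(devices)   # {'dev-1', 'dev-2', 'dev-3'}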


Through data repository 236, SAN host 206 can not only track dynamic provisioning and allocation of storage space within the storage pool to individual computing devices but, depending on the embodiment, can also dynamically add and remove SAN storage devices from the storage pool. For example, when a new SAN storage device 242 is added to the second SAN 209, the SAN host 206 can add the new SAN storage device 242 to its storage pool. In some such embodiments, the SAN host 206 may perform the discovery and addition of the SAN storage device 242 to the pool automatically upon the addition of the SAN storage device 242 to the second SAN 209. For example, SAN host 206 may be configured to actively monitor the second SAN 209 for the addition of any new SAN storage devices, and add any such SAN storage device to the storage pool.


The SAN host 206 further comprises a server component 227. In some embodiments, the server component 227 is responsible for listening to and responding to storage operation requests it receives from the SAN clients 218. For example, the server component 227 may receive a file read, file write, file create, or file delete storage operation request from a SAN client 218 and, in response, perform a corresponding storage operation on a SAN storage device 221 and, depending on the storage operation request, send a response back to the SAN client. According to embodiments that manage the SAN storage devices 221 as a pool of SAN storage resources, the corresponding storage operation may involve the server component 227 performing the storage operation on two or more SAN storage devices 221 within the storage pool. For example, the data blocks of a file involved in an operation may span three SAN storage devices 221 and, hence, in order to operate on the file, the SAN host 206 must perform a storage operation on each of those three SAN storage devices 221.


In some embodiments, the server component may also be configured to implement data de-duplication operations on the SAN storage devices 221, thereby increasing the overall storage capacity of the SAN storage devices 221. For example, in particular embodiments, the data de-duplication may be implemented in the server component such that de-duplication is transparent to the SAN clients 218 jointly accessing the SAN storage devices 221. According to one embodiment, the de-duplication may be facilitated through a hash table or other reference table that resides on the SAN host 206. The table references data that is shared amongst the SAN storage devices 221 managed by the SAN host 206. When the SAN host 206 is transferring data to the SAN storage devices 221, the SAN host 206 can use the table in a de-duplication algorithm to determine whether a data segment already exists on a SAN storage device 221. When it determines that a copy already exists, the SAN host 206 may store a reference to the existing copy of the data segment in place of the actual segment of data. Other de-duplication methodologies may also be employed by SAN host 206.


In some embodiments, when a client has data to transfer to or place in the shared storage, that client can run a deduplication algorithm on segments of the data and use its own representative instantiation of the reference table to determine whether the data segments already exist in a shared data store. Accordingly, for a given segment, the client can determine whether to send the entire data segment to the shared storage or just send a reference or pointer or other information from the reference table if the segment is duplicative of what is already in the data store. In a situation where the analyzed segment is not in the data store, the client device can send the hash value or other reference table information to the central storage (or other location maintaining the main reference table) so that the primary reference table can be updated with the information on the newly added segment.
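
A minimal sketch of the reference-table de-duplication described in the two preceding paragraphs, assuming SHA-256 for segment hashing and an in-memory dictionary as the reference table; neither choice is mandated by the patent.

    import hashlib

    class DedupTable:
        """Hypothetical reference table: a segment is stored once; a later
        write of an identical segment records only a reference to the copy."""

        def __init__(self):
            self.refs = {}   # segment hash -> location of the stored copy

        def store_segment(self, segment: bytes, location: str):
            key = hashlib.sha256(segment).hexdigest()
            if key in self.refs:
                # Duplicate: keep a reference instead of writing the data again.
                return ("ref", self.refs[key])
            self.refs[key] = location
            return ("data", location)

    table = DedupTable()
    print(table.store_segment(b"segment of file data", "dev-1:blk-17"))  # ('data', ...)
    print(table.store_segment(b"segment of file data", "dev-2:blk-03"))  # ('ref', 'dev-1:blk-17')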


Though not illustrated, the SAN clients 218 may comprise a client component that interfaces and interacts with the server component 227 of the SAN host 206 over the first SAN 203. For some embodiments, such a client component allows for seamless and transparent control of storage operations on the SAN storage devices 221 through the SAN host 206. For example, in some embodiments, the client component is able to receive file operation function calls through an application program interface (API), and then translate those function calls into storage operation requests for the SAN host 206 to perform. In this manner, the API encapsulates interactions between the client component and the server component, leaving the SAN client 218 unaware of the implementation of the SAN storage solution. Indeed, for some embodiments, the pool of SAN storage resources (e.g., SAN storage devices 221) appears as a traditional SAN storage device/resource. In this way, some embodiments of the present invention can readily integrate into existing SAN implementations with minimal to no change to the SAN implementation.
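
The client component's translation of API function calls into storage operation requests might look like the following sketch; ClientComponent, LoopbackTransport, and the JSON wire format are hypothetical names introduced here, not taken from the patent.

    import json

    class ClientComponent:
        """Sketch of the client-side API: file-operation function calls are
        translated into storage operation requests and sent to the SAN host
        over the first SAN, so the caller never sees the SAN implementation."""

        def __init__(self, transport):
            self.transport = transport   # e.g., wraps the SAN client's HBA

        def read_file(self, path, offset, length):
            # Translate the API call into a storage operation request ...
            request = {"op": "file_read", "path": path,
                       "offset": offset, "length": length}
            # ... and send it to the SAN host over the first SAN.
            return self.transport.send(json.dumps(request))

    class LoopbackTransport:
        """Stand-in transport so the sketch is self-contained."""
        def send(self, payload):
            return f"host response to {payload}"

    client = ClientComponent(LoopbackTransport())
    print(client.read_file("/vol/a.txt", offset=0, length=4096))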



FIG. 4 provides an example sequence of interactions 300 between entities of a SAN in accordance with one embodiment of the present invention. Turning now to FIG. 4, the sequence begins with the establishment 312 of a Storage Area Network (SAN) connection between a SAN client 303 and a SAN host 306 over a first SAN, and the establishment 315 of a SAN connection between the SAN host 306 and a SAN storage device 309 over a second SAN. Once the connections are established, SAN client 303 performs a storage operation through an API function call 321. In some embodiments, the function call 321 instructs a client component residing on the SAN client 303 to transmit 324 to the SAN host 306 a storage operation request corresponding to the API function call 321. The client component thereby performs a storage operation request on behalf of the SAN client 303. Additionally, by instructing the client component through the API function call 321, the interactions between the SAN client 303 and SAN host 306 are encapsulated by the API. This eases integration of some embodiments into existing SANs.


Upon receiving the request from the SAN client 303, the SAN host 306 responds to the request, generally by sending one or more requests 327 to the SAN storage device 309 over a second SAN, which may invoke one or more responses 328 from the SAN storage device 309.


Subsequently, SAN host 306 may respond 330 to the SAN client 303 based on the response 328 from the SAN storage device 309 or the original request 324 from the SAN client. For example, where the SAN client 303 instructs its client component to perform a file read operation through an API (e.g., 321), the client component would translate the instruction to a file read storage operation request, which is subsequently sent to the SAN host 306 (e.g., 324). The SAN host 306, in response to the file read storage operation request (e.g., 324), requests a file read operation from the SAN storage device 309 (e.g., 327), receives the file read data from the SAN storage device 309 (e.g., 328), and transmits that file read data back to the SAN client (e.g., 330) in response to the original request (e.g., 324). In some embodiments, other storage operations, such as file writes, file creates, and file deletes, could follow a similar interaction flow.


Returning to FIG. 2, in some embodiments, the system 200 of FIG. 3 may be implemented within the storage operation cell 50 of FIG. 2. For example, in one embodiment, system 200 could be implemented such that: the client 85 would operate as one of the SAN clients 218 of FIG. 3; the data agent 95 would operate as the client component that interfaces with SAN host 206 over a first SAN; the storage manager 100, media agents 105, and host bus adapters (HBAs) 133 would collectively operate as SAN host 206 of FIG. 3, where the storage manager 100 in conjunction with the media agents 105 would operate as the server component 227 of FIG. 3, and the HBAs 133 would operate as multiple second host bus adapters (233) of FIG. 3; and storage devices 115 would operate as the SAN storage devices communicating with the HBAs 133 over a second SAN.



FIGS. 5A and 5B provide flowcharts of example methods in accordance with embodiments of the present invention. Specifically, FIG. 5A provides a flowchart of a method 400 for a storage discovery operation in accordance with an embodiment of the present invention, while FIG. 5B provides a flowchart of a method 450 for performing a general file storage operation in accordance with another embodiment of the present invention.


Turning now to FIG. 5A, method 400 begins at operation 403 with a SAN client (e.g., 218, 303) performing a discovery function call through an API. This causes the SAN client to send a discovery request to a SAN host at operation 406. Depending on the embodiment, when the discovery function call is executed in operation 403, the API may instruct a client component residing on the SAN client to transmit a discovery request to the SAN host in operation 406.


The SAN host, upon receiving the discovery request, may send a discovery response back to the client at operation 409. Through the discovery response, the SAN host may, for example, inform the client of its storage features or capabilities (e.g., available storage space, total storage space, occupied storage space). Additionally, some embodiments may respond to the client such that the SAN host appears as a traditional SAN storage device. Alternatively, in embodiments such as method 400, the client may interpret the discovery response from the SAN host (at operation 409) to be one from a traditional SAN storage device.
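
A discovery response of the kind described above might carry fields such as these; the field names, and the idea of reporting pooled totals as a single device, are illustrative assumptions rather than the patent's protocol.

    def handle_discovery_request(total_tb, used_tb):
        # The response describes the pooled storage as one device, so the
        # client treats the SAN host like a traditional SAN storage device.
        return {
            "device_type": "SAN storage device",
            "total_space_tb": total_tb,
            "occupied_space_tb": used_tb,
            "available_space_tb": total_tb - used_tb,
        }

    print(handle_discovery_request(total_tb=5.0, used_tb=1.25))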


In alternative embodiments, the client, upon discovering the existence of the SAN host and acquiring its SAN identifier (e.g., World Wide Name for a Fiber Channel SAN), can send a discovery request to the host bus adapter of the SAN host. The SAN host, in response, informs the client regarding aspects of its storage, such as free storage space and occupied space.


Turning now to FIG. 5B, method 450 for performing a general file operation in accordance with an embodiment is presented. Similar to method 400, method 450 begins at operation 453 with a SAN client (e.g., 218, 303) performing a file operation function call through an API, which causes the SAN client to send a file operation request to a SAN host at operation 456. In some embodiments, when the file operation function call is executed in operation 453, the API instructs a client component residing on the SAN client to transmit the file operation request to the SAN host in operation 456.


At operation 459, method 450 continues with the SAN host arbitrating if, when, and how the SAN host will perform the requested file operation. Based on the results of operation 459, the SAN host may then send a response to the client at operation 462. For example, the client may be requesting a file write to the SAN storage device, but may have already exhausted its storage space allocation. As a result, the SAN host, functioning as arbitrator, may deny the client its file write request and, accordingly, send the client a file write denial (e.g., at operation 462). Other embodiments may involve additional data operations (e.g., file reads, file creation, file deletion). A sketch of this arbitration step follows.
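
This assumes per-client allocations tracked as simple quota and usage maps; the patent does not specify the bookkeeping, so the function and data structures below are illustrative only.

    def arbitrate_file_write(client, size_tb, quotas, usage):
        # Decide whether the requested file write is performed or denied.
        if usage.get(client, 0.0) + size_tb > quotas.get(client, 0.0):
            return {"status": "denied", "reason": "allocation exhausted"}
        usage[client] = usage.get(client, 0.0) + size_tb
        return {"status": "ok"}

    quotas = {"client-1": 1.0}   # provisioned allocation, in TB
    usage = {"client-1": 1.0}    # client already at its allocation
    print(arbitrate_file_write("client-1", 0.1, quotas, usage))  # denied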


As used herein, the term module might describe a given unit of functionality that can be performed in accordance with one or more embodiments of the present invention. As used herein, a module might be implemented utilizing any form of hardware, software, or a combination thereof. For example, one or more processors, controllers, ASICs, PLAs, logical components, software routines or other mechanisms might be implemented to make up a module. In implementation, the various modules described herein might be implemented as discrete modules or the functions and features described can be shared in part or in total among one or more modules. In other words, as would be apparent to one of ordinary skill in the art after reading this description, the various features and functionality described herein may be implemented in any given application and can be implemented in one or more separate or shared modules in various combinations and permutations. Even though various features or elements of functionality may be individually described or claimed as separate modules, one of ordinary skill in the art will understand that these features and functionality can be shared among one or more common software and hardware elements, and such description shall not require or imply that separate hardware or software components are used to implement such features or functionality.


Where components or modules of the invention are implemented in whole or in part using software, in one embodiment, these software elements can be implemented to operate with a computing or processing module capable of carrying out the functionality described with respect thereto. One such example computing module is shown in FIG. 6. Various embodiments are described in terms of this example computing module 500. After reading this description, it will become apparent to a person skilled in the relevant art how to implement the invention using other computing modules or architectures.


Referring now to FIG. 6, computing module 500 may represent, for example, computing or processing capabilities found within desktop, laptop and notebook computers; hand-held computing devices (PDAs, smart phones, cell phones, palmtops, etc.); mainframes, supercomputers, workstations or servers; or any other type of special-purpose or general-purpose computing devices as may be desirable or appropriate for a given application or environment. Computing module 500 might also represent computing capabilities embedded within or otherwise available to a given device. For example, a computing module might be found in other electronic devices such as, for example, digital cameras, navigation systems, cellular telephones, portable computing devices, modems, routers, WAPs, terminals and other electronic devices that might include some form of processing capability.


Computing module 500 might include, for example, one or more processors, controllers, control modules, or other processing devices, such as a processor 504. Processor 504 might be implemented using a general-purpose or special-purpose processing engine such as, for example, a microprocessor, controller, or other control logic. In the example illustrated in FIG. 6, processor 504 is connected to a bus 502, although any communication medium can be used to facilitate interaction with other components of computing module 500 or to communicate externally.


Computing module 500 might also include one or more memory modules, simply referred to herein as main memory 508. For example, random access memory (RAM) or other dynamic memory might be used for storing information and instructions to be executed by processor 504. Main memory 508 might also be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 504. Computing module 500 might likewise include a read only memory (“ROM”) or other static storage device coupled to bus 502 for storing static information and instructions for processor 504.


The computing module 500 might also include one or more various forms of information storage mechanism 510, which might include, for example, a media drive 512 and a storage unit interface 520. The media drive 512 might include a drive or other mechanism to support fixed or removable storage media 514. For example, a hard disk drive, a floppy disk drive, a magnetic tape drive, an optical disk drive, a CD or DVD drive (R or RW), or other removable or fixed media drive might be provided. Accordingly, storage media 514 might include, for example, a hard disk, a floppy disk, magnetic tape, cartridge, optical disk, a CD or DVD, or other fixed or removable medium that is read by, written to or accessed by media drive 512. As these examples illustrate, the storage media 514 can include a computer usable storage medium having stored therein computer software or data.


In alternative embodiments, information storage mechanism 510 might include other similar instrumentalities for allowing computer programs or other instructions or data to be loaded into computing module 500. Such instrumentalities might include, for example, a fixed or removable storage unit 522 and an interface 520. Examples of such storage units 522 and interfaces 520 can include a program cartridge and cartridge interface, a removable memory (for example, a flash memory or other removable memory module) and memory slot, a PCMCIA slot and card, and other fixed or removable storage units and interfaces 520 that allow software and data to be transferred from the storage unit to computing module 500.


Computing module 500 might also include a communications interface 524. Communications interface 524 might be used to allow software and data to be transferred between computing module 500 and external devices. Examples of communications interface 524 might include a modem or softmodem, a network interface (such as an Ethernet, network interface card, WiMedia, IEEE 802.XX or other interface), a communications port (such as for example, a USB port, IR port, RS232 port, Bluetooth® interface, or other port), or other communications interface. Software and data transferred via communications interface 524 might typically be carried on signals, which can be electronic, electromagnetic (which includes optical) or other signals capable of being exchanged by a given communications interface 524. These signals might be provided to communications interface 524 via a channel 528. This channel 528 might carry signals and might be implemented using a wired or wireless communication medium. These signals can deliver the software and data from memory or other storage medium in one computing system to memory or other storage medium in computing system 500. Some examples of a channel might include a phone line, a cellular link, an RF link, an optical link, a network interface, a local or wide area network, and other wired or wireless communications channels.


In this document, the terms “computer program medium” and “computer usable medium” are used to generally refer to physical storage media such as, for example, memory 508, storage unit 520, and media 514. These and other various forms of computer program media or computer usable media may be involved in storing one or more sequences of one or more instructions to a processing device for execution. Such instructions embodied on the medium, are generally referred to as “computer program code” or a “computer program product” (which may be grouped in the form of computer programs or other groupings). When executed, such instructions might enable the computing module 500 to perform features or functions of the present invention as discussed herein.


While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not of limitation. Likewise, the various diagrams may depict an example architectural or other configuration for the invention, which is done to aid in understanding the features and functionality that can be included in the invention. The invention is not restricted to the illustrated example architectures or configurations, but the desired features can be implemented using a variety of alternative architectures and configurations. Indeed, it will be apparent to one of skill in the art how alternative functional, logical or physical partitioning and configurations can be implemented to implement the desired features of the present invention. Also, a multitude of different constituent module names other than those depicted herein can be applied to the various partitions. Additionally, with regard to flow diagrams, operational descriptions and method claims, the order in which the steps are presented herein shall not mandate that various embodiments be implemented to perform the recited functionality in the same order unless the context dictates otherwise.


Although the invention is described above in terms of various exemplary embodiments and implementations, it should be understood that the various features, aspects and functionality described in one or more of the individual embodiments are not limited in their applicability to the particular embodiment with which they are described, but instead can be applied, alone or in various combinations, to one or more of the other embodiments of the invention, whether or not such embodiments are described and whether or not such features are presented as being a part of a described embodiment. Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments.


Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. As examples of the foregoing: the term “including” should be read as meaning “including, without limitation” or the like; the term “example” is used to provide exemplary instances of the item in discussion, not an exhaustive or limiting list thereof; the terms “a” or “an” should be read as meaning “at least one,” “one or more” or the like; and adjectives such as “conventional,” “traditional,” “normal,” “standard,” “known” and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. Likewise, where this document refers to technologies that would be apparent or known to one of ordinary skill in the art, such technologies encompass those apparent or known to the skilled artisan now or at any time in the future.


The presence of broadening words and phrases such as “one or more,” “at least,” “but not limited to” or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent. The use of the term “module” does not imply that the components or functionality described or claimed as part of the module are all configured in a common package. Indeed, any or all of the various components of a module, whether control logic or other components, can be combined in a single package or separately maintained and can further be distributed in multiple groupings or packages or across multiple locations.


Additionally, the various embodiments set forth herein are described in terms of exemplary block diagrams, flow charts and other illustrations. As will become apparent to one of ordinary skill in the art after reading this document, the illustrated embodiments and their various alternatives can be implemented without confinement to the illustrated examples. For example, block diagrams and their accompanying description should not be construed as mandating a particular architecture or configuration.

Claims
  • 1. A Storage Area Network (SAN) host, comprising: a first storage client having higher priority access than a second storage client; at least a first SAN and a second SAN; one or more first SAN storage devices connected to the first SAN, wherein the one or more first SAN storage devices are not connected to the second SAN; one or more second SAN storage devices connected to the second SAN, wherein the one or more second SAN storage devices are not connected to the first SAN; and a SAN host server in communication with the first and second storage clients, the first SAN, and the second SAN, the SAN host server configured to: maintain at least one shared storage pool that shares the one or more first SAN storage devices connected to the first SAN and the one or more second SAN storage devices connected to the second SAN; maintain a priority queue associated with storage operations from the first and second storage clients; and when the SAN host server receives storage operations from the first and second storage clients that seek concurrent access to the at least one shared storage pool, the SAN host server adjusts the priority queue to prioritize access to the at least one shared storage pool by the first storage client having the higher priority access.
  • 2. The SAN host of claim 1, wherein the SAN host server is configured to preempt the second storage client from accessing the at least one shared storage pool.
  • 3. The SAN host of claim 1, wherein the SAN host server is further configured to perform dynamic provisioning of storage space for the at least one shared storage pool using the first and second SAN storage devices.
  • 4. The SAN host of claim 1, wherein the SAN host server is further configured to determine which storage operation associated with the first storage client is performed before other storage operations.
  • 5. The SAN host of claim 1, wherein the SAN host server connects to the first SAN with a first host bus adapter using Fiber Channel (FC), InfiniBand, or Serial Attached SCSI (SAS).
  • 6. The SAN host of claim 5, wherein the SAN host server connects to the second SAN with a second host bus adapter using Fiber Channel (FC), InfiniBand, or Serial Attached SCSI (SAS).
  • 7. The SAN host of claim 6, wherein the first host bus adapter is in target mode.
  • 8. The SAN host of claim 7, wherein the second host bus adapter is in initiator mode.
  • 9. The SAN host of claim 1, wherein the SAN host server is further configured to perform data de-duplication while performing a storage operation.
  • 10. The SAN host of claim 1, wherein the SAN host server is further configured to track storage of one or more data blocks on the one or more second SAN storage devices.
  • 11. A method for a Storage Area Network (SAN) host server, comprising: maintaining, with a SAN host server, at least one shared storage pool that shares one or more first SAN storage devices connected to a first SAN, wherein the one or more first SAN storage devices are not connected to the second SAN, and one or more second SAN storage devices connected to a second SAN, wherein the one or more second SAN storage devices are not connected to the first SAN; maintaining, with the SAN host server, a priority queue associated with storage operations from at least a first storage client and a second storage client, the first storage client having higher priority access than the second storage client; and when the SAN host server receives storage operations from the first and second storage clients that seek concurrent access to the at least one shared storage pool, adjusting the priority queue to prioritize access to the at least one shared storage pool by the first storage client having the higher priority access.
  • 12. The method of claim 11, further comprising preempting the second storage client from accessing the at least one shared storage pool.
  • 13. The method of claim 11, further comprising dynamically provisioning storage space for the at least one shared storage pool using the first and second SAN storage devices.
  • 14. The method of claim 11, further comprising determining which storage operation associated with the first storage client is performed before other storage operations.
  • 15. The method of claim 11, further comprising connecting the SAN host server with the first SAN with a first host bus adapter using Fiber Channel (FC), InfiniBand, or Serial Attached SCSI (SAS).
  • 16. The method of claim 15, further comprising connecting the SAN host server to the second SAN with a second host bus adapter using Fiber Channel (FC), InfiniBand, or Serial Attached SCSI (SAS).
  • 17. The method of claim 16, wherein the first host bus adapter is in target mode.
  • 18. The method of claim 16, wherein the second host bus adapter is in initiator mode.
  • 19. The method of claim 11, further comprising performing data de-duplication while performing a storage operation.
  • 20. The method of claim 11, further comprising tracking storage of one or more data blocks on the one or more second SAN storage devices.
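Claims 1, 4, 11, and 14 recite a SAN host server that maintains a priority queue of storage operations and, when two clients seek concurrent access to the shared pool, reorders that queue so the higher-priority client is served first. The sketch below is a minimal illustration of that arbitration, not the patented implementation; the class names, the integer priority scheme, and the in-memory heap are all assumptions. Preemption of the lower-priority client (claims 2 and 12) could be layered on by cancelling or re-queuing its in-flight operations.

```python
import heapq
import itertools
from dataclasses import dataclass, field

@dataclass(order=True)
class StorageOp:
    priority: int                       # lower value = higher-priority client
    seq: int                            # arrival counter; keeps FIFO order within a priority
    client: str = field(compare=False)
    request: str = field(compare=False)

class SanHostServer:
    """Toy arbiter: serves queued operations highest-priority-first."""

    def __init__(self, client_priorities):
        self._priority_of = client_priorities  # e.g. {"client_a": 0, "client_b": 1}
        self._queue = []                       # heap used as the priority queue
        self._arrivals = itertools.count()

    def submit(self, client, request):
        """Queue a storage operation aimed at the shared pool."""
        heapq.heappush(self._queue, StorageOp(
            self._priority_of[client], next(self._arrivals), client, request))

    def dispatch(self):
        """Pop and perform (here: describe) the highest-priority pending operation."""
        if not self._queue:
            return None
        op = heapq.heappop(self._queue)
        return f"performing '{op.request}' for {op.client}"

server = SanHostServer({"client_a": 0, "client_b": 1})
server.submit("client_b", "write block 7")  # lower-priority client arrives first
server.submit("client_a", "read block 3")   # higher-priority client arrives second
print(server.dispatch())  # -> performing 'read block 3' for client_a
print(server.dispatch())  # -> performing 'write block 7' for client_b
```

Because heapq pops the smallest (priority, seq) tuple, the higher-priority client's operation jumps ahead even though it arrived later, which is the effect the claims describe as adjusting the priority queue.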
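Claims 3 and 13 add dynamic provisioning of storage space for the shared pool using devices drawn from both SANs. One plausible policy, assumed here rather than prescribed by the claims, is thin provisioning: allocate capacity only when a request needs it, from whichever pooled device currently has the most free space.

```python
class SharedPool:
    """Toy thin-provisioner over devices pooled from both SANs."""

    def __init__(self, device_free_gb):
        # Free capacity per pooled device, e.g. {"san1-dev-0": 100, "san2-dev-0": 80}.
        self._free = dict(device_free_gb)

    def provision(self, size_gb):
        """Carve size_gb out of the pooled device with the most free space."""
        device = max(self._free, key=self._free.get)
        if self._free[device] < size_gb:
            raise RuntimeError("shared storage pool exhausted")
        self._free[device] -= size_gb
        return device

pool = SharedPool({"san1-dev-0": 100, "san2-dev-0": 80})
print(pool.provision(30))  # -> san1-dev-0 (100 GB free beats 80 GB)
print(pool.provision(80))  # -> san2-dev-0 (now the device with the most free space)
```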
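Claims 5-8 and 15-18 connect the host to each SAN through a host bus adapter, with the client-facing adapter in target mode (so the host appears as a storage device to its clients) and the storage-facing adapter in initiator mode (so the host acts as a client of the back-end devices). The toy relay below sketches that split; the class names and methods are illustrative assumptions, not a real HBA driver API, and the transport choice (FC, InfiniBand, or SAS) is abstracted away.

```python
class TargetModeHBA:
    """Client-facing adapter (target mode): accepts requests from SAN clients."""
    def __init__(self, handler):
        self._handler = handler

    def receive(self, client_id, request):
        return self._handler(client_id, request)

class InitiatorModeHBA:
    """Storage-facing adapter (initiator mode): issues I/O to back-end devices."""
    def issue_io(self, device, request):
        return f"issued '{request}' to {device}"

class SanHost:
    def __init__(self):
        self.first_hba = TargetModeHBA(self._handle)  # target mode, first SAN
        self.second_hba = InitiatorModeHBA()          # initiator mode, second SAN

    def _handle(self, client_id, request):
        # Translate a client request received on the first SAN into
        # back-end I/O issued over the second SAN.
        return self.second_hba.issue_io("san2-dev-0", f"{client_id}:{request}")

host = SanHost()
print(host.first_hba.receive("client_a", "read block 3"))
# -> issued 'client_a:read block 3' to san2-dev-0
```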
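Claims 9-10 and 19-20 add data de-duplication during storage operations and tracking of which data blocks reside on which second-SAN devices. A common way to realize both, assumed here since the claims do not prescribe an algorithm, is content addressing: hash each block, store one copy per unique hash, and keep an index from hash to device location.

```python
import hashlib

class DedupBlockStore:
    """Toy content-addressed store: one copy per unique block, location tracked."""

    def __init__(self, devices):
        self._devices = list(devices)  # second-SAN storage devices
        self._index = {}               # SHA-256 hex digest -> (device, refcount)
        self._placements = 0

    def write(self, data):
        key = hashlib.sha256(data).hexdigest()
        if key in self._index:                     # duplicate block: count a reference
            device, refs = self._index[key]
            self._index[key] = (device, refs + 1)
        else:                                      # new block: round-robin placement
            device = self._devices[self._placements % len(self._devices)]
            self._placements += 1
            self._index[key] = (device, 1)
        return key

    def locate(self, key):
        """Report which device holds the block (the tracking of claims 10 and 20)."""
        return self._index[key][0]

store = DedupBlockStore(["san2-dev-0", "san2-dev-1"])
k1 = store.write(b"hello world")
k2 = store.write(b"hello world")  # de-duplicated: same key, still one stored copy
assert k1 == k2
print(store.locate(k1))           # -> san2-dev-0
```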
US Referenced Citations (626)
Number Name Date Kind
4296465 Lemak Oct 1981 A
4686620 Ng Aug 1987 A
4751639 Corcoran et al. Jun 1988 A
4995035 Cole et al. Feb 1991 A
5005122 Griffin et al. Apr 1991 A
5093912 Dong et al. Mar 1992 A
5125075 Goodale et al. Jun 1992 A
5133065 Cheffetz et al. Jul 1992 A
5140683 Gallo et al. Aug 1992 A
5163048 Heutink Nov 1992 A
5163148 Walls Nov 1992 A
5193154 Kitajima et al. Mar 1993 A
5204958 Cheng et al. Apr 1993 A
5212772 Masters May 1993 A
5212784 Sparks May 1993 A
5226157 Nakano et al. Jul 1993 A
5239647 Anglin et al. Aug 1993 A
5241668 Eastridge et al. Aug 1993 A
5241670 Eastridge et al. Aug 1993 A
5265159 Kung Nov 1993 A
5276860 Fortier et al. Jan 1994 A
5276867 Kenley et al. Jan 1994 A
5287500 Stoppani, Jr. Feb 1994 A
5301351 Jippo Apr 1994 A
5311509 Heddes et al. May 1994 A
5321816 Rogan et al. Jun 1994 A
5333251 Urabe et al. Jul 1994 A
5333315 Saether et al. Jul 1994 A
5347653 Flynn et al. Sep 1994 A
5386545 Gombos, Jr. et al. Jan 1995 A
5387459 Hung Feb 1995 A
5410700 Fecteau et al. Apr 1995 A
5426284 Doyle Jun 1995 A
5448718 Cohn et al. Sep 1995 A
5448724 Hayashi et al. Sep 1995 A
5455926 Keele et al. Oct 1995 A
5485606 Midgdey et al. Jan 1996 A
5491810 Allen Feb 1996 A
5495607 Pisello et al. Feb 1996 A
5504873 Martin et al. Apr 1996 A
5537568 Yanai et al. Jul 1996 A
5544345 Carpenter et al. Aug 1996 A
5544347 Yanai et al. Aug 1996 A
5555404 Torbjornsen et al. Sep 1996 A
5559957 Balk Sep 1996 A
5559991 Kanfi Sep 1996 A
5564037 Lam Oct 1996 A
5574898 Leblang et al. Nov 1996 A
5598546 Blomgren Jan 1997 A
5608865 Midgely et al. Mar 1997 A
5613134 Lucus et al. Mar 1997 A
5615392 Harrison et al. Mar 1997 A
5619644 Crockett et al. Apr 1997 A
5632012 Belsan et al. May 1997 A
5634052 Morris May 1997 A
5638509 Dunphy et al. Jun 1997 A
5642496 Kanfi Jun 1997 A
5649185 Antognini et al. Jul 1997 A
5659614 Bailey Aug 1997 A
5666501 Jones et al. Sep 1997 A
5673381 Huai et al. Sep 1997 A
5673382 Cannon et al. Sep 1997 A
5675511 Prasad et al. Oct 1997 A
5677900 Nishida et al. Oct 1997 A
5682513 Candelaria et al. Oct 1997 A
5687343 Fecteau et al. Nov 1997 A
5699361 Ding et al. Dec 1997 A
5719786 Nelson et al. Feb 1998 A
5729743 Squibb Mar 1998 A
5734817 Roffe et al. Mar 1998 A
5737747 Vishlitsky et al. Apr 1998 A
5740405 DeGraaf Apr 1998 A
5742807 Masinter Apr 1998 A
5751997 Kullick et al. May 1998 A
5758359 Saxon May 1998 A
5758649 Iwashita et al. Jun 1998 A
5761677 Senator et al. Jun 1998 A
5761734 Pfeffer et al. Jun 1998 A
5764972 Crouse et al. Jun 1998 A
5778165 Saxon Jul 1998 A
5778395 Whiting et al. Jul 1998 A
5790828 Jost Aug 1998 A
5793867 Cordery Aug 1998 A
5805920 Sprenkle et al. Sep 1998 A
5806058 Mori et al. Sep 1998 A
5806078 Hug et al. Sep 1998 A
5812398 Nielsen Sep 1998 A
5812748 Ohran et al. Sep 1998 A
5813009 Johnson et al. Sep 1998 A
5813013 Shakib et al. Sep 1998 A
5813017 Morris Sep 1998 A
5829045 Motoyama Oct 1998 A
5829046 Tzelnic et al. Oct 1998 A
5832522 Blickenstaff et al. Nov 1998 A
5835953 Ohran Nov 1998 A
5845257 Fu et al. Dec 1998 A
5860073 Ferrel et al. Jan 1999 A
5860104 Witt et al. Jan 1999 A
5864846 Voorhees et al. Jan 1999 A
5864871 Kitain et al. Jan 1999 A
5875478 Blumenau Feb 1999 A
5875481 Ashton et al. Feb 1999 A
5878230 Weber et al. Mar 1999 A
5881311 Woods Mar 1999 A
5884067 Storm et al. Mar 1999 A
5887134 Ebrahim Mar 1999 A
5893139 Kamiyama Apr 1999 A
5896531 Curtis et al. Apr 1999 A
5897642 Capossela et al. Apr 1999 A
5898431 Webster et al. Apr 1999 A
5901327 Ofek May 1999 A
5924102 Perks Jul 1999 A
5926836 Blumenau Jul 1999 A
5933104 Kimura Aug 1999 A
5933601 Fanshier et al. Aug 1999 A
5950205 Aviani, Jr. Sep 1999 A
5956519 Wise et al. Sep 1999 A
5956733 Nakano et al. Sep 1999 A
5958005 Thorne et al. Sep 1999 A
5966730 Zulch Oct 1999 A
5970030 Dimitri et al. Oct 1999 A
5970233 Liu et al. Oct 1999 A
5970255 Tran et al. Oct 1999 A
5974563 Beeler, Jr. Oct 1999 A
5978841 Berger Nov 1999 A
5983239 Cannon Nov 1999 A
5983368 Noddings Nov 1999 A
5987478 See et al. Nov 1999 A
5991753 Wilde Nov 1999 A
5995091 Near et al. Nov 1999 A
6000020 Chin et al. Dec 1999 A
6003089 Shaffer et al. Dec 1999 A
6009274 Fletcher et al. Dec 1999 A
6012053 Pant et al. Jan 2000 A
6012090 Chung et al. Jan 2000 A
6012415 Linseth Jan 2000 A
6016553 Schneider et al. Jan 2000 A
6018744 Mamiya et al. Jan 2000 A
6021415 Cannon et al. Feb 2000 A
6023710 Steiner et al. Feb 2000 A
6026414 Anglin Feb 2000 A
6026437 Muschett et al. Feb 2000 A
6052735 Ulrich et al. Apr 2000 A
6061671 Baker et al. May 2000 A
6064821 Shough et al. May 2000 A
6070228 Belknap et al. May 2000 A
6073128 Pongracz et al. Jun 2000 A
6073137 Brown et al. Jun 2000 A
6073220 Gunderson Jun 2000 A
6076148 Kedem et al. Jun 2000 A
6078934 Lahey et al. Jun 2000 A
6081883 Popelka et al. Jun 2000 A
6085030 Whitehead et al. Jul 2000 A
6088694 Burns et al. Jul 2000 A
6091518 Anabuki Jul 2000 A
6094416 Ying Jul 2000 A
6101585 Brown et al. Aug 2000 A
6105037 Kishi Aug 2000 A
6105129 Meier et al. Aug 2000 A
6108640 Slotznick Aug 2000 A
6108712 Hayes, Jr. Aug 2000 A
6112239 Kenner et al. Aug 2000 A
6122668 Teng et al. Sep 2000 A
6131095 Low et al. Oct 2000 A
6131190 Sidwell Oct 2000 A
6137864 Yaker Oct 2000 A
6148377 Carter et al. Nov 2000 A
6148412 Cannon et al. Nov 2000 A
6151590 Cordery et al. Nov 2000 A
6154787 Urevig et al. Nov 2000 A
6154852 Amundson et al. Nov 2000 A
6161111 Mutalik et al. Dec 2000 A
6161192 Lubbers et al. Dec 2000 A
6167402 Yeager Dec 2000 A
6175829 Li et al. Jan 2001 B1
6182198 Hubis et al. Jan 2001 B1
6189051 Oh et al. Feb 2001 B1
6212512 Barney et al. Apr 2001 B1
6212521 Minami et al. Apr 2001 B1
6223269 Blumenau Apr 2001 B1
6226759 Miller et al. May 2001 B1
6230164 Rikieta et al. May 2001 B1
6249795 Douglis Jun 2001 B1
6253217 Dourish et al. Jun 2001 B1
6260069 Anglin Jul 2001 B1
6263368 Martin Jul 2001 B1
6266679 Szalwinski et al. Jul 2001 B1
6266784 Hsiao et al. Jul 2001 B1
6269382 Cabrera et al. Jul 2001 B1
6269431 Dunham Jul 2001 B1
6275953 Vahalia et al. Aug 2001 B1
6292783 Rohler Sep 2001 B1
6295541 Bodnar Sep 2001 B1
6298439 Beglin Oct 2001 B1
6301592 Aoyama et al. Oct 2001 B1
6304880 Kishi Oct 2001 B1
6314439 Bates et al. Nov 2001 B1
6314460 Knight et al. Nov 2001 B1
6324581 Xu et al. Nov 2001 B1
6327590 Chidlovskii et al. Dec 2001 B1
6327612 Watanabe Dec 2001 B1
6328766 Long Dec 2001 B1
6330570 Crighton Dec 2001 B1
6330572 Sitka Dec 2001 B1
6330589 Kennedy Dec 2001 B1
6330642 Carteau Dec 2001 B1
6341287 Sziklai et al. Jan 2002 B1
6343287 Kumar et al. Jan 2002 B1
6343324 Hubis et al. Jan 2002 B1
6345288 Reed et al. Feb 2002 B1
6350199 Williams et al. Feb 2002 B1
6351763 Kawanaka Feb 2002 B1
6351764 Voticky et al. Feb 2002 B1
RE37601 Eastridge et al. Mar 2002 E
6353878 Dunham Mar 2002 B1
6356801 Goodman et al. Mar 2002 B1
6356863 Sayle Mar 2002 B1
6360306 Bergsten Mar 2002 B1
6363462 Bergsten Mar 2002 B1
6367029 Mayhead et al. Apr 2002 B1
6367073 Elledge Apr 2002 B2
6374336 Peters et al. Apr 2002 B1
6374363 Wu et al. Apr 2002 B1
6389432 Pothapragada et al. May 2002 B1
6389459 McDowell May 2002 B1
6396513 Helfman et al. May 2002 B1
6397308 Ofek et al. May 2002 B1
6418478 Ignatius et al. Jul 2002 B1
6421709 McCormick et al. Jul 2002 B1
6421711 Blumenau et al. Jul 2002 B1
6438595 Blumenau et al. Aug 2002 B1
6442600 Anderson Aug 2002 B1
6442706 Wahl et al. Aug 2002 B1
6453325 Cabrera et al. Sep 2002 B1
6466592 Chapman Oct 2002 B1
6466973 Jaffe Oct 2002 B2
6470332 Weschler Oct 2002 B1
6473794 Guheen et al. Oct 2002 B1
6484162 Edlund et al. Nov 2002 B1
6487561 Ofek et al. Nov 2002 B1
6487644 Huebsch et al. Nov 2002 B1
6493811 Blades et al. Dec 2002 B1
6502205 Yanai et al. Dec 2002 B1
6519679 Devireddy et al. Feb 2003 B2
6535910 Suzuki et al. Mar 2003 B1
6538669 Lagueux, Jr. et al. Mar 2003 B1
6540623 Jackson Apr 2003 B2
6542909 Tamer et al. Apr 2003 B1
6542972 Ignatius et al. Apr 2003 B2
6546545 Honarvar et al. Apr 2003 B1
6549918 Probert et al. Apr 2003 B1
6553410 Kikinis Apr 2003 B2
6557039 Leong et al. Apr 2003 B1
6564219 Lee et al. May 2003 B1
6564228 O'Connor May 2003 B1
6581143 Gagne et al. Jun 2003 B2
6593656 Ahn et al. Jul 2003 B2
6604149 Deo et al. Aug 2003 B1
6615241 Miller et al. Sep 2003 B1
6618771 Leja et al. Sep 2003 B1
6629110 Cane et al. Sep 2003 B2
6631477 LeCrone et al. Oct 2003 B1
6631493 Ottesen et al. Oct 2003 B2
6647396 Parnell et al. Nov 2003 B2
6647399 Zaremba Nov 2003 B2
6647409 Sherman et al. Nov 2003 B1
6654825 Clapp et al. Nov 2003 B2
6658436 Oshinsky et al. Dec 2003 B2
6658526 Nguyen et al. Dec 2003 B2
6675177 Webb Jan 2004 B1
6704933 Tanaka et al. Mar 2004 B1
6721767 De Meno et al. Apr 2004 B2
6721784 Leonard et al. Apr 2004 B1
6728733 Tokui Apr 2004 B2
6732088 Glance May 2004 B1
6732124 Koseki et al. May 2004 B1
6732231 Don et al. May 2004 B1
6732244 Ashton et al. May 2004 B2
6742092 Huebsch et al. May 2004 B1
6745178 Emens et al. Jun 2004 B1
6757794 Cabrera et al. Jun 2004 B2
6760723 Oshinsky et al. Jul 2004 B2
6763351 Subramaniam et al. Jul 2004 B1
6789161 Blendermann et al. Sep 2004 B1
6795828 Ricketts Sep 2004 B2
6816941 Carlson et al. Nov 2004 B1
6820070 Goldman et al. Nov 2004 B2
6839741 Tsai Jan 2005 B1
6839803 Loh et al. Jan 2005 B1
6850994 Gabryjelski et al. Feb 2005 B2
6860422 Hull et al. Mar 2005 B2
6865568 Chau Mar 2005 B2
6868424 Jones et al. Mar 2005 B2
6871163 Hiller et al. Mar 2005 B2
6871182 Winnard et al. Mar 2005 B1
6874023 Pennell et al. Mar 2005 B1
6886020 Zahavi et al. Apr 2005 B1
6892221 Ricart et al. May 2005 B2
6912627 Matsunami et al. Jun 2005 B2
6912645 Dorward et al. Jun 2005 B2
6941304 Gainey et al. Sep 2005 B2
6948038 Berkowitz et al. Sep 2005 B2
6952758 Chron et al. Oct 2005 B2
6957186 Guheen et al. Oct 2005 B1
6968351 Butterworth Nov 2005 B2
6970997 Shibayama et al. Nov 2005 B2
6973553 Archibald, Jr. et al. Dec 2005 B1
6976039 Chefalas et al. Dec 2005 B2
6978265 Schumacher Dec 2005 B2
6983351 Gibble et al. Jan 2006 B2
6995675 Curkendall et al. Feb 2006 B2
6996616 Leighton et al. Feb 2006 B1
7003519 Biettron et al. Feb 2006 B1
7003641 Prahlad et al. Feb 2006 B2
7028079 Mastrianni et al. Apr 2006 B2
7035880 Crescenti et al. Apr 2006 B1
7039860 Gautestad May 2006 B1
7058661 Ciaramitaro et al. Jun 2006 B2
7062761 Slavin et al. Jun 2006 B2
7076685 Pillai et al. Jul 2006 B2
7082441 Zahavi et al. Jul 2006 B1
7085904 Mizuno et al. Aug 2006 B2
7096315 Takeda et al. Aug 2006 B2
7099901 Sutoh et al. Aug 2006 B2
7103731 Gibble et al. Sep 2006 B2
7103740 Colgrove et al. Sep 2006 B1
7107298 Prahlad et al. Sep 2006 B2
7107395 Ofek et al. Sep 2006 B1
7107416 Stuart et al. Sep 2006 B2
7120757 Tsuge Oct 2006 B2
7130970 Devassy et al. Oct 2006 B2
7133870 Tripp et al. Nov 2006 B1
7134041 Murray et al. Nov 2006 B2
7139826 Watanabe et al. Nov 2006 B2
7146387 Russo et al. Dec 2006 B1
7149893 Leonard et al. Dec 2006 B1
7155421 Haidar Dec 2006 B1
7155465 Lee et al. Dec 2006 B2
7155481 Prahlad et al. Dec 2006 B2
7155633 Tuma et al. Dec 2006 B2
7159081 Suzuki Jan 2007 B2
7171468 Yeung et al. Jan 2007 B2
7171585 Gail et al. Jan 2007 B2
7174312 Harper et al. Feb 2007 B2
7188141 Novaes Mar 2007 B2
7194454 Hansen et al. Mar 2007 B2
7240100 Wein et al. Jul 2007 B1
7246140 Therrien et al. Jul 2007 B2
7246207 Kottomtharayil et al. Jul 2007 B2
7269612 Devarakonda et al. Sep 2007 B2
7269664 Hütsch et al. Sep 2007 B2
7272606 Borthakur et al. Sep 2007 B2
7278142 Bandhole et al. Oct 2007 B2
7287047 Kavuri Oct 2007 B2
7290017 Wang et al. Oct 2007 B1
7293133 Colgrove et al. Nov 2007 B1
7313659 Suzuki Dec 2007 B2
7315923 Retnamma et al. Jan 2008 B2
7315924 Prahlad et al. Jan 2008 B2
7325017 Tormasov et al. Jan 2008 B2
7328225 Beloussov et al. Feb 2008 B1
7328325 Solis et al. Feb 2008 B1
7343356 Prahlad et al. Mar 2008 B2
7343365 Farnham et al. Mar 2008 B2
7343453 Prahlad et al. Mar 2008 B2
7343459 Prahlad et al. Mar 2008 B2
7346623 Prahlad et al. Mar 2008 B2
7346676 Swildens et al. Mar 2008 B1
7346751 Prahlad et al. Mar 2008 B2
7356657 Mikami Apr 2008 B2
7359917 Winter et al. Apr 2008 B2
7376947 Evers May 2008 B2
7379978 Anderson et al. May 2008 B2
7380072 Kottomtharayil et al. May 2008 B2
7386535 Kalucha et al. Jun 2008 B1
7386552 Kitamura et al. Jun 2008 B2
7389311 Crescenti et al. Jun 2008 B1
7395282 Crescenti et al. Jul 2008 B1
7401338 Bowen Jul 2008 B1
7409509 Devassy et al. Aug 2008 B2
7424543 Rice, III Sep 2008 B2
7430587 Malone et al. Sep 2008 B2
7433301 Akahane et al. Oct 2008 B2
7434219 De Meno et al. Oct 2008 B2
7437445 Roytman et al. Oct 2008 B1
7447692 Oshinsky et al. Nov 2008 B2
7454569 Kavuri et al. Nov 2008 B2
7457790 Kochunni et al. Nov 2008 B2
7467167 Patterson Dec 2008 B2
7472142 Prahlad et al. Dec 2008 B2
7472238 Gokhale Dec 2008 B1
7484054 Kottomtharayil et al. Jan 2009 B2
7490207 Amarendran Feb 2009 B2
7496589 Jain et al. Feb 2009 B1
7496841 Hadfield et al. Feb 2009 B2
7500053 Kavuri et al. Mar 2009 B1
7500150 Sharma et al. Mar 2009 B2
7509316 Greenblatt et al. Mar 2009 B2
7512601 Cucerzan et al. Mar 2009 B2
7519726 Palliyll et al. Apr 2009 B2
7523483 Dogan Apr 2009 B2
7529748 Wen et al. May 2009 B2
7532340 Koppich et al. May 2009 B2
7536291 Retnamma et al. May 2009 B1
7543125 Gokhale Jun 2009 B2
7546324 Prahlad et al. Jun 2009 B2
7565484 Ghosal et al. Jul 2009 B2
7577689 Masinter et al. Aug 2009 B1
7577694 Nakano et al. Aug 2009 B2
7581077 Ignatius et al. Aug 2009 B2
7584469 Mitekura et al. Sep 2009 B2
7587715 Barrett et al. Sep 2009 B1
7593935 Sullivan Sep 2009 B2
7596586 Gokhale et al. Sep 2009 B2
7596713 Mani-Meitav Sep 2009 B2
7603626 Williams et al. Oct 2009 B2
7606844 Kottomtharayil Oct 2009 B2
7610285 Zoellner et al. Oct 2009 B1
7613748 Brockway et al. Nov 2009 B2
7617253 Prahlad et al. Nov 2009 B2
7617262 Prahlad et al. Nov 2009 B2
7617541 Plotkin et al. Nov 2009 B2
7627598 Burke Dec 2009 B1
7627617 Kavuri et al. Dec 2009 B2
7636743 Erofeev Dec 2009 B2
7651593 Prahlad et al. Jan 2010 B2
7661028 Erofeev Feb 2010 B2
7668798 Scanlon et al. Feb 2010 B2
7668884 Prahlad et al. Feb 2010 B2
7673175 Mora et al. Mar 2010 B2
7676542 Moser et al. Mar 2010 B2
7685126 Patel et al. Mar 2010 B2
7689899 Leymaster et al. Mar 2010 B2
7716171 Kryger May 2010 B2
7730031 Forster Jun 2010 B2
7734593 Prahlad et al. Jun 2010 B2
7734669 Kottomtharayil et al. Jun 2010 B2
7734715 Hyakutake et al. Jun 2010 B2
7751628 Reisman Jul 2010 B1
7757043 Kavuri et al. Jul 2010 B2
7792789 Prahlad et al. Sep 2010 B2
7801871 Gosnell Sep 2010 B2
7802067 Prahlad et al. Sep 2010 B2
7814118 Kottomtharayil et al. Oct 2010 B2
7827266 Gupta Nov 2010 B2
7831793 Chakravarty et al. Nov 2010 B2
7840537 Gokhale et al. Nov 2010 B2
7844676 Prahlad et al. Nov 2010 B2
7865517 Prahlad et al. Jan 2011 B2
7870355 Erofeev Jan 2011 B2
7873808 Stewart Jan 2011 B2
7877351 Crescenti et al. Jan 2011 B2
7882077 Gokhale et al. Feb 2011 B2
7882093 Kottomtharayil et al. Feb 2011 B2
7890718 Gokhale Feb 2011 B2
7890719 Gokhale Feb 2011 B2
7930481 Nagler et al. Apr 2011 B1
7937393 Prahlad et al. May 2011 B2
7937420 Tabellion et al. May 2011 B2
7937702 De Meno et al. May 2011 B2
7962455 Erofeev Jun 2011 B2
7984063 Kottomtharayil et al. Jul 2011 B2
8037028 Prahlad et al. Oct 2011 B2
8041673 Crescenti et al. Oct 2011 B2
8046331 Sanghavi et al. Oct 2011 B1
8055627 Prahlad et al. Nov 2011 B2
8060514 Arrouye et al. Nov 2011 B2
8078583 Prahlad et al. Dec 2011 B2
8086809 Prahlad et al. Dec 2011 B2
8103670 Oshinsky et al. Jan 2012 B2
8103829 Kavuri et al. Jan 2012 B2
8121983 Prahlad et al. Feb 2012 B2
8166263 Prahlad Apr 2012 B2
8204859 Ngo Jun 2012 B2
8214444 Prahlad et al. Jul 2012 B2
8219524 Gokhale Jul 2012 B2
8266106 Prahlad et al. Sep 2012 B2
8266397 Prahlad et al. Sep 2012 B2
8271830 Erofeev Sep 2012 B2
8352422 Prahlad et al. Jan 2013 B2
8352433 Crescenti et al. Jan 2013 B2
8402219 Kavuri et al. Mar 2013 B2
8433679 Crescenti et al. Apr 2013 B2
8504634 Prahlad et al. Aug 2013 B2
8566278 Crescenti et al. Oct 2013 B2
8577844 Prahlad et al. Nov 2013 B2
8725731 Oshinsky et al. May 2014 B2
8725964 Prahlad et al. May 2014 B2
8782064 Kottomtharayil Jul 2014 B2
8930319 Crescenti et al. Jan 2015 B2
9003117 Kavuri et al. Apr 2015 B2
9003137 Prahlad et al. Apr 2015 B2
9021198 Vijayan et al. Apr 2015 B1
9104340 Prahlad et al. Aug 2015 B2
9274803 De Meno et al. Mar 2016 B2
9286398 Oshinsky et al. Mar 2016 B2
9578101 Vijayan et al. Feb 2017 B2
20020004883 Nguyen et al. Jan 2002 A1
20020032878 Karpf Mar 2002 A1
20020040376 Yamanaka et al. Apr 2002 A1
20020042869 Tate et al. Apr 2002 A1
20020049626 Mathias et al. Apr 2002 A1
20020049778 Bell et al. Apr 2002 A1
20020049883 Schneider et al. Apr 2002 A1
20020069324 Gerasimov et al. Jun 2002 A1
20020099690 Schumacher Jul 2002 A1
20020103848 Giacomini et al. Aug 2002 A1
20020107877 Whiting et al. Aug 2002 A1
20020120858 Porter et al. Aug 2002 A1
20020161753 Inaba et al. Oct 2002 A1
20020161982 Riedel Oct 2002 A1
20020196744 O'Connor Dec 2002 A1
20030046313 Leung et al. Mar 2003 A1
20030050979 Takahashi Mar 2003 A1
20030055972 Fuller et al. Mar 2003 A1
20030061491 Jaskiewicz et al. Mar 2003 A1
20030097361 Huang et al. May 2003 A1
20030101086 San Miguel May 2003 A1
20030163399 Harper et al. Aug 2003 A1
20030172158 Pillai et al. Sep 2003 A1
20030204572 Mannen et al. Oct 2003 A1
20030212752 Thunquest et al. Nov 2003 A1
20040039689 Penney et al. Feb 2004 A1
20040107199 Dairymple et al. Jun 2004 A1
20040122938 Messick et al. Jun 2004 A1
20040133915 Moody, II et al. Jul 2004 A1
20040181476 Smith Sep 2004 A1
20040193953 Callahan et al. Sep 2004 A1
20040205206 Naik et al. Oct 2004 A1
20040215749 Tsao Oct 2004 A1
20040230829 Dogan et al. Nov 2004 A1
20040236868 Martin et al. Nov 2004 A1
20040250026 Tanoue Dec 2004 A1
20040267815 De Mes Dec 2004 A1
20050033800 Kavuri et al. Feb 2005 A1
20050039069 Prahlad et al. Feb 2005 A1
20050044114 Kottomtharayil et al. Feb 2005 A1
20050097070 Enis et al. May 2005 A1
20050146510 Ostergard Jul 2005 A1
20050246510 Retnamma et al. Nov 2005 A1
20050251635 Yoshinari et al. Nov 2005 A1
20050251786 Citron et al. Nov 2005 A1
20050268068 Ignatius et al. Dec 2005 A1
20050278207 Ronnewinkel Dec 2005 A1
20060005048 Osaki et al. Jan 2006 A1
20060010154 Prahlad et al. Jan 2006 A1
20060010227 Atluri Jan 2006 A1
20060036619 Fuerst et al. Feb 2006 A1
20060070061 Cox et al. Mar 2006 A1
20060115802 Reynolds Jun 2006 A1
20060116999 Dettinger et al. Jun 2006 A1
20060149604 Miller Jul 2006 A1
20060149724 Ritter et al. Jul 2006 A1
20060224846 Amarendran et al. Oct 2006 A1
20060282900 Johnson et al. Dec 2006 A1
20070022145 Kavuri Jan 2007 A1
20070028229 Knatcher Feb 2007 A1
20070043715 Kaushik et al. Feb 2007 A1
20070043956 El Far et al. Feb 2007 A1
20070061266 Moore et al. Mar 2007 A1
20070061298 Wilson et al. Mar 2007 A1
20070078913 Crescenti et al. Apr 2007 A1
20070100867 Celik et al. May 2007 A1
20070143756 Gokhale Jun 2007 A1
20070166674 Kochunni et al. Jul 2007 A1
20070174536 Nakagawa et al. Jul 2007 A1
20070183224 Erofeev Aug 2007 A1
20070250810 Tittizer et al. Oct 2007 A1
20070288536 Sen et al. Dec 2007 A1
20070296258 Calvert et al. Dec 2007 A1
20080028164 Ikemoto et al. Jan 2008 A1
20080059515 Fulton Mar 2008 A1
20080229037 Bunte et al. Sep 2008 A1
20080243855 Prahlad et al. Oct 2008 A1
20080243914 Prahlad et al. Oct 2008 A1
20080243957 Prahlad et al. Oct 2008 A1
20080243958 Prahlad et al. Oct 2008 A1
20080282048 Miura Nov 2008 A1
20080288947 Gokhale et al. Nov 2008 A1
20080288948 Attarde et al. Nov 2008 A1
20080320319 Muller et al. Dec 2008 A1
20090083484 Basham et al. Mar 2009 A1
20090150608 Innan Jun 2009 A1
20090171883 Kochunni et al. Jul 2009 A1
20090177719 Kavuri Jul 2009 A1
20090228894 Gokhale Sep 2009 A1
20090248762 Prahlad et al. Oct 2009 A1
20090271791 Gokhale Oct 2009 A1
20090319534 Gokhale Dec 2009 A1
20090319585 Gokhale Dec 2009 A1
20090320029 Kottomtharayil Dec 2009 A1
20090320033 Gokhale et al. Dec 2009 A1
20090320037 Gokhale et al. Dec 2009 A1
20100031017 Gokhale et al. Feb 2010 A1
20100049753 Prahlad et al. Feb 2010 A1
20100070466 Prahlad et al. Mar 2010 A1
20100070474 Lad Mar 2010 A1
20100070725 Prahlad et al. Mar 2010 A1
20100070726 Ngo et al. Mar 2010 A1
20100076932 Lad Mar 2010 A1
20100094808 Erofeev Apr 2010 A1
20100100529 Erofeev Apr 2010 A1
20100114837 Prahlad et al. May 2010 A1
20100122053 Prahlad et al. May 2010 A1
20100131461 Prahlad et al. May 2010 A1
20100145909 Ngo Jun 2010 A1
20100179941 Agrawal et al. Jul 2010 A1
20100205150 Prahlad et al. Aug 2010 A1
20110066817 Kavuri et al. Mar 2011 A1
20110072097 Prahlad et al. Mar 2011 A1
20110093471 Brockway et al. Apr 2011 A1
20110173207 Kottomtharayil et al. Jul 2011 A1
20110261686 Kotha Oct 2011 A1
20120059797 Prahlad et al. Mar 2012 A1
20120059800 Guo Mar 2012 A1
20120089800 Prahlad et al. Apr 2012 A1
20120124042 Oshinsky et al. May 2012 A1
20120166582 Binder Jun 2012 A1
20130326178 Crescenti et al. Dec 2013 A1
20150207883 Vijayan et al. Jul 2015 A1
20150378611 Prahlad et al. Dec 2015 A1
20160085468 Crescenti et al. Mar 2016 US
20170118289 Vijayan et al. Apr 2017 A1
20180018099 Prahlad et al. Jan 2018 A1
20180337995 Vijayan et al. Nov 2018 A1
20200278792 Prahlad et al. Sep 2020 A1
Foreign Referenced Citations (39)
Number Date Country
0259912 Mar 1988 EP
0341230 Nov 1989 EP
0381651 Aug 1990 EP
0405926 Jan 1991 EP
0467546 Jan 1992 EP
0599466 Jun 1994 EP
0670543 Sep 1995 EP
0717346 Jun 1996 EP
0774715 May 1997 EP
0809184 Nov 1997 EP
0862304 Sep 1998 EP
0899662 Mar 1999 EP
0910019 Apr 1999 EP
0981090 Feb 2000 EP
0986011 Mar 2000 EP
1035690 Sep 2000 EP
1174795 Jan 2002 EP
2216368 Oct 1989 GB
1064178 Aug 2013 HK
07-046271 Feb 1995 JP
07-073080 Mar 1995 JP
08-044598 Feb 1996 JP
H11-102314 Apr 1999 JP
H11-259459 Sep 1999 JP
2000-035969 Feb 2000 JP
2001-60175 Mar 2001 JP
2003-531435 Oct 2003 JP
WO 9417474 Aug 1994 WO
WO 9513580 May 1995 WO
WO 9839707 Sep 1998 WO
WO 9912098 Mar 1999 WO
WO 9914692 Mar 1999 WO
WO 9923585 May 1999 WO
WO 0058865 Oct 2000 WO
WO 0104756 Jan 2001 WO
WO 0106368 Jan 2001 WO
WO 0116693 Mar 2001 WO
WO 0180005 Oct 2001 WO
WO 05050381 Jun 2005 WO
Non-Patent Literature Citations (49)
Entry
IBM, “Prioritized Storage in a SAN File System,” Apr. 28, 2006, pp. 1-4. (Year: 2006).
U.S. Appl. No. 13/038,614, filed Sep. 26, 2013, Prahlad, et al.
U.S. Appl. No. 13/787,583, filed Mar. 6, 2013, Kavuri, et al.
Armstead et al., “Implementation of a Campus-Wide Distributed Mass Storage Service: The Dream vs. Reality,” IEEE, 1995, pp. 190-199.
Arneson, “Development of Omniserver; Mass Storage Systems,” Control Data Corporation, 1990, pp. 88-93.
Arneson, “Mass Storage Archiving in Network Environments” IEEE, 1998, pp. 45-50.
Ashton, et al., "Two Decades of policy-based storage management for the IBM mainframe computer", www.research.ibm.com, published Apr. 10, 2003, printed Jan. 3, 2009, 19 pages.
Cabrera, et al. “ADSM: A Multi-Platform, Scalable, Back-up and Archive Mass Storage System,” Digest of Papers, Compcon '95, Proceedings of the 40th IEEE Computer Society International Conference, Mar. 5, 1995-Mar. 9, 1995, pp. 420-427, San Francisco, CA.
Catapult, Inc., Microsoft Outlook 2000 Step by Step, Published May 7, 1999, “Collaborating with Others Using Outlook & Exchange”, p. 8 including “Message Timeline.”
Eitel, “Backup and Storage Management in Distributed Heterogeneous Environments,” IEEE, 1994, pp. 124-126.
Gait, “The Optical File Cabinet: A Random-Access File system for Write-Once Optical Disks,” IEEE Computer, vol. 21, No. 6, pp. 11-22 (1988).
http://en.wikipedia.org/wiki/Naive_Bayes_classifier, printed on Jun. 1, 2010, in 7 pages.
Hsiao, et al., “Using a Multiple Storage Quad Tree on a Hierarchal VLSI Compaction Scheme”, IEEE, 1990, pp. 1-15.
Jander, "Launching Storage-Area Net," Data Communications, US, McGraw Hill, NY, vol. 27, No. 4 (Mar. 21, 1998), pp. 64-72.
Microsoft, about using Microsoft Excel 2000 files with earlier version Excel, 1985-1999, Microsoft, p. 1.
Microsoft Press Computer Dictionary Third Edition, “Data Compression,” Microsoft Press, 1997, p. 130.
Pitoura et al., "Locating Objects in Mobile Computing", IEEE Transactions on Knowledge and Data Engineering, vol. 13, No. 4, Jul.-Aug. 2001, pp. 571-592.
Rosenblum et al., “The Design and Implementation of a Log-Structure File System,” Operating Systems Review SIGOPS, vol. 25, No. 5, New York, US, pp. 1-15 (May 1991).
Rowe et al., “Indexes for User Access to Large Video Databases”, Storage and Retrieval for Image and Video Databases II, IS,& T/SPIE Symp. On Elec. Imaging Sci. & Tech., Feb. 1994, pp. 1-12.
Swift et al., “Improving the Reliability of Commodity Operating Systems” ACM 2003.
Szor, The Art of Virus Research and Defense, Symantec Press (2005) ISBN 0-321-30454-3, Part 1.
Szor, The Art of Virus Research and Defense, Symantec Press (2005) ISBN 0-321-30454-3, Part 2.
Toyoda, Fundamentals of Oracle 8i Backup and Recovery, DB Magazine, Japan, Shoeisha, Co., Ltd.; Jul. 2000; vol. 10, No. 4, 34 total pages.
Veeravalli, B., “Network Caching Strategies for a Shared Data Distribution for a Predefined Service Demand Sequence,” IEEE Transactions on Knowledge and Data Engineering, vol. 15, No. 6, Nov.-Dec. 2003, pp. 1487-1497.
Weatherspoon H. et al., “Silverback: A Global-Scale Archival System,” Mar. 2001, pp. 1-15.
Witten et al., Data Mining: Practical Machine Learning Tools and Techniques, Ian H. Witten & Eibe Frank, Elsevier (2005) ISBN 0-12-088407-0, Part 1.
Witten et al., Data Mining: Practical Machine Learning Tools and Techniques, Ian H. Witten & Eibe Frank, Elsevier (2005) ISBN 0-12-088407-0, Part 2.
Supplementary European Search Report, European Patent Application No. 02747883, dated Sep. 15, 2006; 2 pages.
Communication in European Application No. 02 747 883.3, dated Jul. 20, 2007.
Japanese Office Action dated Jul. 15, 2008, Application No. 2003/502696.
International Search Report dated Aug. 22, 2002, PCT/US2002/017973.
International Search Report dated Dec. 23, 2003, PCT/US2001/003088.
European Examination Report, Application No. 01906806.3-1244, dated Sep. 13, 2006, 3 pages.
European Communication, Application No. 01906806.3, dated Sep. 21, 2010, 6 pages.
Office Action in European Application No. 02747883.3 dated Jul. 7, 2014.
International Search Report and Preliminary Report on Patentability dated Feb. 21, 2002, PCT/US2001/003183.
European Office Action dated Mar. 26, 2008, EP019068337.
International Search Report and Preliminary Report on Patentability dated Sep. 29, 2001, PCT/US2001/003209.
International Search Report and Preliminary Report on Patentability dated Mar. 3, 2003, PCT/US2002/018169.
Supplementary European Search Report dated Sep. 21, 2006, EP02778952.8.
Translation of Japanese Office Action dated Mar. 25, 2008, Application No. 2003-504235.
European Office Action dated Apr. 22, 2008, EP02778952.8.
International Preliminary Report on Patentability dated May 15, 2006, PCT/US2004/038278 filed Nov. 15, 2004 (Publication No. WO2005/050381).
International Search Report, PCT/US2004/03827, dated Feb. 1, 2006.
International Preliminary Report on Patentability, PCT/US2004/038278, dated May 15, 2006.
International Search Report and Preliminary Report on Patentability dated May 4, 2001, PCT/US2000/019363.
International Search Report dated Dec. 21, 2000, PCT/US2000/019324.
International Search Report on Patentability dated Dec. 21, 2000 in PCT/US00/19364 filed Nov. 14, 2000 (Publication No. WO01/04756).
International Search Report dated Dec. 21, 2000, PCT/US2000/019329.
Related Publications (1)
Number Date Country
20200259899 A1 Aug 2020 US
Divisions (1)
Number Date Country
Parent 13010694 Jan 2011 US
Child 14675418 US
Continuations (5)
Number Date Country
Parent 16664298 Oct 2019 US
Child 16789848 US
Parent 15952024 Apr 2018 US
Child 16664298 US
Parent 15400687 Jan 2017 US
Child 15952024 US
Parent 14929130 Oct 2015 US
Child 15400687 US
Parent 14675418 Mar 2015 US
Child 14929130 US