The present invention relates to computer processors, and more specifically, to utilizing a plurality of prioritized queues on computer processors.
A storage system is a computer that provides storage service relating to the organization of information on writable persistent storage devices, such as memories, tapes, disks or solid state devices, e.g., flash memory, etc. The storage system is commonly deployed within a storage area network (SAN) or a network attached storage (NAS) environment. When used within a NAS environment, the storage system may be embodied as a file server including an operating system that implements a file system to logically organize the information as a hierarchical structure of data containers, such as files on, e.g., the disks. Each “on-disk” file may be implemented as a set of data structures, e.g., disk blocks, configured to store information, such as the actual data (i.e., file data) for the file.
A network environment may be provided wherein information (data) is stored in secure storage served by one or more storage systems coupled to one or more security appliances. Each security appliance is configured to transform unencrypted data (cleartext) generated by clients (or initiators) into encrypted data (ciphertext) destined for secure storage or “cryptainers” on the storage system (or target). As used herein, a cryptainer is a piece of storage on a storage device, such as a disk, in which the encrypted data is stored. In the context of a SAN environment, a cryptainer can be, e.g., a disk, a region on the disk or several regions on one or more disks that, in the context of a SAN protocol, is accessible as a logical unit (lun). In the context of a NAS environment, the cryptainer may be a collection of files on one or more disks. Specifically, in the context of the CIFS protocol, the cryptainer may be a share, while in the context of the NFS protocol, the cryptainer may be a mount point. In a tape environment, the cryptainer may be a tape containing a plurality of tape blocks.
Each cryptainer is associated with its own encryption key, e.g., a cryptainer key, which is used by the security appliance to encrypt and decrypt the data stored on the cryptainer. An encryption key is a code or number which, when taken together with an encryption algorithm, defines a unique transformation used to encrypt or decrypt data. Data remains encrypted while stored in a cryptainer until requested by an authorized client. At that time, the security appliance retrieves the encrypted data from the cryptainer, decrypts it and forwards the unencrypted data to the client.
One noted disadvantage that may arise during use of a security appliance is that certain operations may be long running and may generate a backlog within a processor of the security appliance. For example, execution of compression/decompression operations on, e.g., a tape data stream, by the processor may require significant amounts of time. Conversely, execution of single block encryption/decryption operations for data access requests directed to a disk drive may proceed rapidly. However, should a long-running tape compression/decompression operation be loaded onto an operations queue associated with the processor before a block-based encryption/decryption operation, execution of the encryption/decryption operation by the processor may have to wait until the long-running operation completes. This may substantially lower overall throughput and reduce system performance.
The disadvantages of the prior art are overcome by providing a system and method for utilizing prioritized queues on a computer, such as a security appliance or a storage system. Illustratively, a plurality of queues is organized on the computer to enable long-running operations to be loaded on (directed to) a long-running operation queue, while faster, “short-running” operations are directed to a short-running operation queue. The queues may be associated with one or more processors (e.g., processor cores) of the computer to thereby enable improved throughput. When an operation request (e.g., a tape compression operation, an encryption operation, a disk compression operation, etc.) is received at a processor intake of the computer, a determination is made whether the operation contained within the received request is a long-running operation, e.g., a tape compression operation. If so, the operation is placed in the long-running operation queue. The processor core that is associated with the long-running operation queue thereafter removes the operation from the queue and executes the operation. The status of the operation, e.g., operation complete, an error code, etc., is then loaded onto an outgoing long-running operation status queue. The status may subsequently be removed and reported to an initiator of the long-running operation.
Similarly, if a determination is made that the received operation is not a long-running operation, e.g., a single-block encryption operation, the operation is placed in a short-running operation queue. The processor core associated with the short-running operation queue then removes the operation from the queue and processes the operation. Status information relating to that operation is then loaded onto a short-running status queue. The status may be subsequently removed from the queue and reported to the initiator of the operation. By utilizing a plurality of queues directed to different priorities of operation, overall system throughput may be increased by, among other things, reducing the number of short-running operations that are delayed.
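By way of a concrete, non-limiting illustration of this queueing discipline, the following sketch routes incoming operation requests to either a long-running or a short-running queue and posts completion status for later reporting to the initiator. The language (Go), the operation kinds, and all identifiers are hypothetical; the embodiments described herein do not prescribe any particular implementation.

```go
package main

import "fmt"

// Hypothetical operation kinds; the text distinguishes only long-running
// operations (e.g., tape compression) from short-running ones (e.g., single-block encryption).
type OpKind int

const (
	TapeCompression OpKind = iota // long-running
	BlockEncryption               // short-running
)

type Operation struct {
	ID   int
	Kind OpKind
}

type Status struct {
	OpID int
	Err  error // nil means "operation complete"; otherwise an error code
}

// isLongRunning is the classification step performed at the processor intake.
func isLongRunning(op Operation) bool {
	return op.Kind == TapeCompression
}

func main() {
	longQ := make(chan Operation, 16)   // long-running operation queue
	shortQ := make(chan Operation, 16)  // short-running operation queue
	longStatus := make(chan Status, 16) // outgoing long-running status queue
	shortStatus := make(chan Status, 16)

	// One worker per queue stands in for the processor core(s) bound to that queue.
	worker := func(in <-chan Operation, out chan<- Status) {
		for op := range in {
			// ... perform the compression or encryption work here ...
			out <- Status{OpID: op.ID} // operation complete
		}
	}
	go worker(longQ, longStatus)
	go worker(shortQ, shortStatus)

	// Intake: route each request according to its classification.
	for _, op := range []Operation{{1, TapeCompression}, {2, BlockEncryption}} {
		if isLongRunning(op) {
			longQ <- op
		} else {
			shortQ <- op
		}
	}

	// Status is later removed from the status queues and reported to the initiator.
	fmt.Println(<-shortStatus, <-longStatus)
}
```

In this sketch the channels stand in for the operation and status queues, and the worker goroutines stand in for the processor cores associated with each queue; a short-running request is never queued behind a long-running one.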
The above and further advantages of the invention may be better understood by referring to the following description in conjunction with the accompanying drawings in which like reference numerals indicate identical or functionally similar elements.
A. Security Appliance Environment
In the illustrative embodiment, the security appliance employs a conventional encryption algorithm, e.g., the Advanced Encryption Standard (AES) or other appropriate algorithms, to transform unencrypted data (cleartext) generated by the clients 102 into encrypted data (ciphertext) intended for secure storage, i.e., one or more cryptainers, on the storage system 110. To that end, the security appliance illustratively uses a high-quality, software or hardware-based pseudo random number generation technique to generate encryption keys. The encryption and decryption operations are performed using these encryption keys, such as a cryptainer key associated with each cryptainer. As described herein, the security appliance 200 uses an appropriate cryptainer key to encrypt or decrypt portions of data stored in a particular cryptainer. In addition to performing encryption and decryption operations, the security appliance 200 also performs access control, authentication, virtualization, and secure-logging operations.
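As a minimal sketch only, the following example encrypts a portion of data under a per-cryptainer key using AES in GCM mode from Go's standard library. The text specifies AES but not a particular mode or key length, and the in-memory key handling shown here deliberately omits the hardware key protection provided by the SEP.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// encryptWithCryptainerKey encrypts cleartext under a per-cryptainer key.
// AES-GCM is used here for illustration; the embodiments specify only AES.
func encryptWithCryptainerKey(cryptainerKey, cleartext []byte) ([]byte, error) {
	block, err := aes.NewCipher(cryptainerKey)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	// Prepend the nonce so the ciphertext is self-contained for later decryption.
	return gcm.Seal(nonce, nonce, cleartext, nil), nil
}

func main() {
	key := make([]byte, 32) // 256-bit cryptainer key from a high-quality RNG
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	ciphertext, err := encryptWithCryptainerKey(key, []byte("file data destined for the cryptainer"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d ciphertext bytes\n", len(ciphertext))
}
```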
Illustratively, the clients 102 may comprise application service providers, virtual tape systems, etc. Thus, in accordance with an illustrative embodiment of the present invention, clients 102 may send a plurality of types of operations to the security appliance 200. For example, a client may send one or more block-based encryption/decryption operations directed to a logical unit number (lun) or may transmit one or more compression/decompression operations directed to a virtual tape stream.
B. Security Appliance
In accordance with the illustrative embodiment of the present invention, the SEP 270 includes a plurality of processor cores 275 A, B. It should be noted that two cores are shown for illustrative purposes only. In accordance with alternative embodiments of the present invention, the SEP 270 may have any number of processor cores including, for example, a single processor core. As such, the depiction of the SEP 270 having two processor cores 275 A, B should be taken as exemplary only. Furthermore, while a single SEP 270 is shown, alternative embodiments may utilize a plurality of SEPs; as such, this description should likewise be taken as exemplary only.
Since the SEP 270 protects encryption keys from being “touched” (processed) by the system software executing on the CPU 202, a mechanism is needed to load keys into and retrieve keys from the SEP. To that end, the card reader 230 provides an interface between a “smart” system card 250 and the SEP 270 for purposes of exchanging encryption keys. Illustratively, the system card is a FIPS 140-2 level-3 certified card that is configured with customized software code. The security appliance (and card reader 230) are further configured to support additional smart cards referred to as recovery cards 260a,b. The security appliance illustratively supports up to 40 recovery cards with a default value of, e.g., 5 recovery cards, although any number of cards can be supported based on the particular security policy.
Operationally, encryption keys are exchanged between the SEP 270 and system card 250, where they are “secret shared” (cryptographically assigned) to the recovery cards 260 as recovery keys, as described herein. These recovery keys can thereafter be applied (via the recovery cards) to the security appliance 200 to enable restoration of other encryption keys (such as cryptainer keys). A quorum setting for the recovery cards 260 may be provided such that the recovery keys stored on the recovery cards are backed up in a threshold scheme whereby, e.g., any 2 of the 5 default cards can recover the keys.
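The "any 2 of the 5" recovery property is characteristic of a threshold secret-sharing scheme. Purely for illustration (the construction actually used by the recovery cards is not specified here), a Shamir-style 2-of-5 split of a recovery key s over a small prime field works as follows:

```latex
% Illustrative only: a Shamir-style 2-of-5 threshold split; not necessarily the scheme used.
\[
  f(x) \equiv s + a_1 x \pmod{p}, \qquad \text{e.g. } p = 257,\; s = 123,\; a_1 = 45,
\]
\[
  \text{shares } (i, f(i)) \text{ stored on cards } 1..5:\quad (1,168),\ (2,213),\ (3,1),\ (4,46),\ (5,91).
\]
\[
  \text{Any two shares } (x_1,y_1),(x_2,y_2) \text{ recover }
  s \equiv y_1\,\frac{x_2}{x_2-x_1} + y_2\,\frac{x_1}{x_1-x_2} \pmod{p}.
\]
% e.g. from (2,213) and (5,91): s = 213*5*86 + 91*2*171 = 98 + 25 = 123 (mod 257),
% where 86 = 3^{-1} and 171 = (-3)^{-1} modulo 257; a single share reveals nothing about s.
```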
The network adapters 220 couple the security appliance 200 between one or more clients 102 and one or more storage systems 110 over point-to-point links, wide area networks and virtual private networks implemented over a public network (Internet) or shared local area networks. In a SAN environment configured to support various Small Computer Systems Interface (SCSI)-based data access protocols, including SCSI encapsulated over TCP (iSCSI) and SCSI encapsulated over FC (FCP), the network adapters 220 may comprise host bus adapters (HBAs) having the mechanical, electrical and signaling circuitry needed to connect the appliance 200 to, e.g., a FC network. In a NAS environment configured to support, e.g., the conventional Common Internet File System (CIFS) and the Network File System (NFS) data access protocols, the network adapters 220 may comprise network interface cards (NICs) having the mechanical, electrical and signaling circuitry needed to connect the appliance to, e.g., an Ethernet network.
The memory 210 illustratively comprises storage locations that are addressable by the processors and adapters for storing software programs and data structures associated with the present invention. The processor and adapters may, in turn, comprise processing elements and/or logic circuitry configured to execute the software programs and manipulate the data structures. An operating system 212, portions of which are typically resident in memory and executed by the processing elements, functionally organizes the appliance 200 by, inter alia, invoking security operations in support of software processes and/or modules implemented by the appliance. It will be apparent to those skilled in the art that other processing and memory means, including various computer readable media, may be used for storing and executing program instructions pertaining to the invention described herein.
The operating system 212 illustratively organizes the memory 210 into an address space arrangement available to the software processes and modules executing on the processors.
For both NAS and SAN environments, data is received at a proxy 320 of the security appliance. The proxy 320 is a kernel module embodied as, e.g., the network protocol stack configured to interpret the protocol over which data is received and to enforce certain access control rules based on one or more policies. Each policy is served by a box manager 360 that is illustratively embodied as a database application process configured to manage a configuration repository or database (Config DB 370) that stores permissions, access control lists (ACLs), system-wide settings and encrypted keys. A socket server 380 provides interfaces to the box manager 360, including (i) an HTTP web interface 382 embodied as, e.g., a graphical user interface (GUI) adapted for web-based administration, (ii) an SSH interface 384 for command line interface (CLI) command administration, and (iii) an SNMP interface 386 for remote management and monitoring.
Specifically, the box manager 360 supplies the permissions and encrypted keys to the proxy 320, which intercepts data access requests and identifies the sources (clients 102) of those requests, as well as the types of requests and the storage targets (cryptainers) of those requests. The proxy also queries, using, e.g., an interprocess communication (IPC) technique, the box manager for permissions associated with each client and, in response, the box manager 360 supplies the appropriate permissions and encrypted key (e.g., a cryptainer key). The proxy 320 then bundles the data together with the encrypted key and forwards that information to a crypto process (layer) 330 that functions as a “wrapper” for the SEP 270. As noted, the SEP resides on an interface card, which is hereinafter referred to as a data crypto card (DCC 340).
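As an illustrative aside, the bundle that the proxy 320 forwards to the crypto layer 330 can be pictured as a simple record pairing the intercepted data with the encrypted cryptainer key and permissions. The field names and types below are hypothetical and are not drawn from the embodiments; the sketch is in Go for consistency with the other sketches.

```go
package main

import "fmt"

// cryptoRequest is a hypothetical representation of the bundle handed to the
// crypto layer: the intercepted data plus the encrypted cryptainer key and the
// permissions obtained from the box manager over IPC.
type cryptoRequest struct {
	InitiatorID  string // client that issued the data access request
	Cryptainer   string // storage target, e.g., a lun, share, or tape
	EncryptedKey []byte // cryptainer key, still wrapped; unwrapped only inside the SEP
	Permissions  uint32 // access rights supplied by the box manager
	Data         []byte // cleartext to encrypt (or ciphertext to decrypt)
}

func main() {
	req := cryptoRequest{
		InitiatorID:  "client-102",
		Cryptainer:   "lun-7",
		EncryptedKey: []byte{0xde, 0xad, 0xbe, 0xef},
		Permissions:  0o600,
		Data:         []byte("block of file data"),
	}
	fmt.Printf("forwarding %d bytes for %s to the crypto layer\n", len(req.Data), req.Cryptainer)
}
```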
Illustratively, the DCC 340 cooperates with the crypto layer 330 to implement a plurality of prioritized queues, such as operation queues 400 and status queues 500, in accordance with an illustrative embodiment of the present invention. Each operation queue 400 is utilized by the crypto layer 330 to access the DCC 340 by, for example, supplying starting and ending points of data as well as offsets into the data along with the encryption keys used to encrypt data. In accordance with the illustrative embodiment of the present invention, the DCC 340 removes operations from the queue 400 and processes them before placing status indicators in status queue 500. The crypto layer 330 may retrieve status information, e.g., operation complete, error codes, etc., from queue 500 and return the status information to the appropriate initiator of the operation. In an illustrative embodiment, operation requests are received by the crypto layer 330 and enqueued in an operations queue 400 before processing by one of the cores of the SEP 270. The crypto layer or, in alternative embodiments, the DCC 340 determines whether the received operation request is a long-running operation. If the operation contained in the received request is a long-running operation, e.g., a compression operation, the operation is enqueued in a long-running operation queue 400. Otherwise, the operation is enqueued in a short-running operation queue 400. In accordance with alternative embodiments, there may be a plurality of long- and short-running operation queues (and associated status queues 500). Each of the queues may be associated with one or more processor cores in a predefined manner, established by, e.g., the DCC 340, to enable optimized processing of operations. In alternative embodiments, the association of individual queues with specific cores may dynamically change depending on the type of operation mix being processed. It should be noted that in alternative embodiments, queues 400, 500 may be implemented in modules other than the DCC 340, e.g., queues 400, 500 may be implemented in the crypto layer 330. As such, the description of queues being implemented by the DCC 340 should be taken as exemplary only. Furthermore, the method of associating processor cores with queues may vary, as will be appreciated by one skilled in the art. Thus, in the illustrative embodiment, the decision as to which processor core 275 an operation is directed is made by the software executing on the CPU 202. However, in alternative embodiments, this decision may be performed by other modules. As such, this description should be taken as exemplary only.
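The queue-to-core association described above can likewise be pictured with a small sketch. The mapping below, including the rebalancing threshold, is hypothetical; it merely illustrates one way a predefined association could be established and later changed dynamically based on the operation mix.

```go
package main

import "fmt"

// QueueClass distinguishes the prioritized operation queues described above.
type QueueClass int

const (
	LongRunning QueueClass = iota
	ShortRunning
)

// coreAssignment maps each SEP processor core to the class of queue it services.
// The initial, predefined association dedicates core 0 to long-running work
// (e.g., tape compression) and core 1 to short-running work (e.g., block encryption).
type coreAssignment map[int]QueueClass

func defaultAssignment() coreAssignment {
	return coreAssignment{0: LongRunning, 1: ShortRunning}
}

// rebalance illustrates a dynamic re-association: if the observed mix of pending
// operations is dominated by short-running requests, both cores service the
// short-running queue until the mix changes. The threshold is hypothetical.
func (a coreAssignment) rebalance(pendingLong, pendingShort int) {
	if pendingShort > 4*pendingLong {
		a[0] = ShortRunning
	} else {
		a[0] = LongRunning
	}
}

func main() {
	a := defaultAssignment()
	a.rebalance(1, 10) // a heavy short-running mix pulls core 0 over
	fmt.Println(a)
}
```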
The crypto layer 330 interacts with the DCC 340 by accessing (reading and writing) registers on the DCC and, to that end, functions as a PCI interface. The DCC 340 includes one or more previously loaded keys used to decrypt the supplied encrypted keys; upon decrypting an encrypted key, the DCC uses the decrypted key to encrypt the supplied data. Upon completion of encryption of the data, the DCC returns the encrypted data as ciphertext to the proxy 320, which forwards the encrypted data to the storage system 110.
Notably, the security appliance 200 “virtualizes” storage such that, to a client 102, the appliance appears as a storage system 110 whereas, from the perspective of the storage system, the security appliance appears as a client. Such virtualization requires that the security appliance manipulate network addresses, e.g., IP addresses, with respect to data access requests and responses. Illustratively, certain of the customizations to the network protocol stack of the proxy 320 involve virtualization optimizations provided by the appliance. For example, the security appliance 200 manipulates (changes) the source and destination IP addresses of the data access requests and responses.
C. Prioritized Queues
The present invention provides a system and method for prioritized queues. Illustratively, a plurality of queues is organized to enable long-running operations to be directed to a long-running operation queue, while faster operations are directed to a non-long-running operation queue. Queues may be associated with one or more of a plurality of processor cores to thereby enable improved throughput. When an operation request is received, a determination is made whether it is a long-running operation, e.g., a tape compression operation. If so, the operation is placed in a long-running operation queue. When the processor core that is executing long-running operations is ready for the next operation, it removes an operation from the long-running operation queue and processes the operation. The status of the operation is then placed in an outgoing long-running operation status queue. The status may then be removed and reported to the initiator of the long-running operation.
Similarly, if a determination is made that the received operation is not a long-running operation, e.g., a single-block encryption operation, the operation is placed in a non-long-running operation queue. The processor core executing non-long-running operations then removes the operation from the queue and processes the operation. Status information relating to the operation is then placed in a non-long-running status queue. The status may then be removed from the queue and reported back to the initiator of the operation. By utilizing a plurality of queues directed to different priorities of operation, overall system throughput may be increased and the number of non-long-running operations that are delayed may be reduced.
If the operation received is a long-running operation, the procedure continues to step 620 where the received operation is placed in a long-running operation queue. At a later point in time, the operation is removed from the long-running operation queue and processed by one or more cores of the SEP in step 625. The status of the operation is then placed on a long-running status queue in step 630. The status is then removed from the long-running status queue and reported to the initiator in step 635. The procedure 600 then completes in step 640.
However, if in step 615 it is determined that the operation is not a long-running operation, then the procedure branches to step 645 where the received operation is placed on a non-long-running (i.e., short-running) operation queue. A SEP core removes the operation from the queue and processes the operation in step 650. The status of the processed operation is then placed on a non-long-running status queue in step 655. The status is then removed from the queue and reported to the initiator in step 660 before the procedure completes in step 640.
To again summarize, the present invention enables a plurality of operation queues to be configured in a defined system arrangement with one or more processor cores. Upon receiving an operation request, the system enqueues the operation onto one of the queues based upon one or more characteristics of the operation. Illustratively, the characteristic is whether the operation is a long running operation. However, it should be noted that in alternative embodiments, additional and/or differing characteristics may be utilized. Once enqueued, the operation is subsequently processed by one of the processor cores that is illustratively configured to process (execute) operations having a certain characteristic. In alternative embodiments, the association of processor cores and queues may be dynamically modified depending on, e.g., the operation types and quantities that are being received by the system.
The foregoing description has been directed to specific embodiments of this invention. It will be apparent, however, that other variations and modifications may be made to the described embodiments, with the attainment of some or all of their advantages. For instance, it is expressly contemplated that the procedures, processes, layers and/or modules described herein may be implemented in hardware, software embodied as a computer-readable medium having executable program instructions, firmware, or a combination thereof. Accordingly, this description is to be taken only by way of example and not to otherwise limit the scope of the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.