Storage consolidation architectures allow businesses to efficiently allocate storage resources across the enterprise as well as rapidly expand storage capacity, performance, and availability to meet the demands of a growing and changing business. One such architecture may use storage area network (“SAN”) based storage systems. A SAN based storage system may provide multiple, virtual volumes that may be mapped to various user systems. For example, in an enterprise with hundreds of users, a single SAN device may be used to host a virtual volume from which each user may boot and execute software. The user machines may be thin clients connected to the associated storage volumes over a network.
Since storage consolidation may involve one, or a few, storage servers or SAN devices answering to multiple clients, hard disk access at the SAN device may significantly increase. This may be particularly so during peak times for the booting of clients, such as in the morning. During these peak times, client boot times may be adversely impacted, negatively affecting productivity.
It is with respect to these considerations and others that the disclosure made herein is presented.
Technologies are described herein for accelerating the boot process of client computers by consolidating client-specific boot data in a data storage system. Through the utilization of the technologies and concepts presented herein, boot statistics are collected for a number of client computers booting from virtual storage volumes provided by the data storage system. The boot statistics are analyzed to identify client-specific boot data stored on each of the virtual storage volumes, and the client-specific boot data is consolidated and copied into contiguous regions of a single, consolidated boot volume in the data storage system. Requests for read operations for the client-specific boot data during boot of the client computers can then be redirected to the consolidated boot volume, reducing the disk thrashing that may take place during the concurrent booting of multiple client computers connected to the storage system, thereby increasing boot performance for the client computers.
It should be appreciated that the above-described subject matter may also be implemented as a computer-controlled apparatus, a computer process, a computing system, or as an article of manufacture such as a computer-readable storage medium. These and various other features will be apparent from a reading of the following Detailed Description and a review of the associated drawings.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended that this Summary be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
The following detailed description is directed to technologies for consolidating client-specific boot data from multiple storage volumes in a data storage system to a single, high-performance consolidated boot volume. The client-specific boot data is defragmented and copied to contiguous regions of the consolidated boot volume, and a start length table ("SLT") is created for each storage volume, mapping the location of the client-specific boot data on the storage volume to the copy of the data on the consolidated boot volume. This allows read requests for the client-specific boot data from client computers booting from the storage volumes to be redirected to the consolidated boot volume, thereby reducing the disk thrashing that may take place during the concurrent booting of multiple client computers and increasing boot performance for the client computers.
While the subject matter described herein is presented in the general context of program modules that execute on one or more storage nodes of a storage system, those skilled in the art will recognize that other implementations may be performed in combination with other types of program modules. Generally, program modules include routines, programs, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the subject matter described herein may be practiced with other computer system configurations, including multiprocessor systems, microprocessor-based systems, programmable consumer electronics, minicomputers, mainframe computers, special-purposed hardware devices, network appliances, and the like.
In the following detailed description, references are made to the accompanying drawings that form a part hereof, and that show, by way of illustration, specific embodiments or examples. Like numerals represent like elements throughout the several figures.
Each storage node 106 includes one or more mass storage devices or “disks” 108A-108S (referred to collectively herein as disks 108). According to one embodiment, the disks 108 are traditional hard disk drives. Further examples of disks 108 may include optically scanned media, solid-state media, non-volatile memories, or tape media; each, or in combination, employing magnetic, capacitive, optical, semiconductor, electrical, quantum, dynamic, static, or any other data storage technology. The disks 108 may be operatively connected to the storage node 106 using IDE, ATA, SATA, PATA, SCSI, USB, PCI, Firewire, FC, or any other bus, link, connection, protocol, network, controller, or combination thereof for I/O transfers.
According to implementations, a storage node 106 may be housed in a one rack space or "1U" unit storing up to four disks 108. For example, the storage node 106A is a 1U computing system that includes four disks 108A-108D. Alternatively, a storage node 106 may be housed in a three rack space or "3U" unit storing up to fifteen disks. For example, the storage node 106G is a 3U computing system that includes fifteen disks 108E-108S. Other types of enclosures may also be utilized for the storage nodes 106 that occupy more or fewer rack units and that store fewer or more disks 108. In this regard, it should be appreciated that the type of storage enclosure and the number of disks 108 utilized by a storage node 106 are not generally significant to the implementation of the embodiments described herein. Any type of storage enclosure and virtually any number of disks or other types of mass storage devices may be utilized.
All of the storage nodes 106 in the clusters 104 may be physically housed in the same rack, located in the same building, or distributed over geographically diverse locations, such as various buildings, cities, or countries. Through the use of network ports and other appropriate network cabling and equipment, each storage node 106 within a cluster 104 is communicatively connected to the other nodes within the cluster. Many different types and numbers of connections may be made between the nodes of each cluster. The storage nodes 106 may be interconnected by any type of network or communication links, such as an Ethernet or Gigabit Ethernet LAN, a fiber ring, a fiber star, wireless, optical, satellite, a WAN, a MAN, or any other network technology, topology, protocol, or combination thereof. One or more virtual storage clusters 104 may be further communicatively connected together to form the storage system 102.
Each storage node 106 of a cluster 104 may be configured to handle I/O operations independently, but the nodes of the cluster may be exposed to an initiator of an I/O operation as a single, consolidated storage device. It should be appreciated that a storage cluster 104 may include any number of storage nodes 106. A virtualized cluster 104 in which each storage node 106 contains an independent processing unit, and in which each node can field I/Os independently (and route them according to the cluster layout) is referred to as a horizontally virtualized or peer cluster. A cluster 104 in which each storage node 106 provides storage, but the processing and mapping is done completely or primarily in a single node, is referred to as a vertically virtualized cluster.
Data stored in the storage system 102 may be striped across the storage nodes 106 of each cluster 104, or across the storage clusters of the storage system. Striping data across nodes generally ensures that different I/O operations are fielded by different nodes, thereby utilizing all of the nodes simultaneously, and that the same I/O operation is not split between multiple nodes. Striping the data in this manner provides a boost to random I/O performance without decreasing sequential I/O performance. In addition, one or more disks 108 within a storage node 106, within each cluster 104, or across the clusters of the storage system 102 may contain mirrored data or parity data to provide data redundancy and protection against failure of one or more of the disks 108.
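For illustration only, the following sketch shows one way striped block placement could work; the stripe size, node count, and function name are assumptions and are not taken from the disclosure, which does not prescribe a particular striping algorithm.

```python
# Hypothetical striping sketch: consecutive stripes rotate across storage nodes,
# so concurrent I/O operations tend to be fielded by different nodes, while any
# single I/O that stays within one stripe is served entirely by one node.

STRIPE_SIZE_BLOCKS = 128   # assumed stripe width, in blocks
NUM_NODES = 4              # assumed number of storage nodes in the cluster


def node_for_block(logical_block: int) -> int:
    """Return the index of the storage node that holds the given logical block."""
    stripe_index = logical_block // STRIPE_SIZE_BLOCKS
    return stripe_index % NUM_NODES


if __name__ == "__main__":
    # Blocks 0-127 map to node 0, blocks 128-255 to node 1, and so on.
    for lba in (0, 127, 128, 400, 600):
        print(f"block {lba} -> node {node_for_block(lba)}")
```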
According to embodiments, one or more storage nodes 106 and/or clusters 104 of the storage system 102 may be consolidated and exposed to initiators as a single storage device, such as a storage area network ("SAN") device. A storage processor module 110 is responsible for consolidating and mapping storage across the storage nodes 106 of the storage system 102 as well as coordinating the activities of the nodes. The storage processor module 110 may be implemented in hardware or software on one or more of the storage nodes 106 in the storage system 102, or it may reside in another computing device operatively connected to the storage nodes. In one embodiment, the storage processor module 110 may embody multiple modules executing on and cooperating between the processing units of multiple storage nodes, such as two of the storage nodes 106, as shown in
One or more client computers 112A-112C (referred to generally herein as client computers 112) may further be connected to the storage system 102 via a network 114. The network 114 may be any type of network or communication link, such as an Ethernet or Gigabit Ethernet LAN, a fiber ring, a fiber star, wireless, optical, satellite, a WAN, a MAN, or any other network technology, topology, protocol, or combination thereof. An appropriate protocol, such as the Internet Small Computer Systems Interface ("iSCSI") or Fibre Channel ("FC") protocols, may be utilized to enable the client computers 112 to communicate with the storage system 102 and utilize the various functions provided by the storage processor module 110 over the network 114.
In one embodiment, the client computers 112 may be “thin clients,” operating without local storage but configured to use virtual storage volumes provided by the storage system 102 as their primary storage devices for booting, software execution, and data storage. The client computers 112 can boot from their respective virtual volumes provided by the storage system 102. However, to reduce the boot delay that may be experienced during peak boot times, such as at the beginning of a work day when a substantial number of client computers 112 may be booting simultaneously, a mechanism to accelerate the boot process may be implemented.
To accelerate the boot process, the common boot data 204 may be copied to an enhanced cache referred to as a boot cache 208, as shown in
According to embodiments, the client-specific boot data 206 can be de-fragmented and relocated to a single, consolidated boot volume 210, as shown in
As further shown in
Turning now to
The storage processor module 110 may gather the boot statistics during the boot period in a table or other data structure, such as the boot pattern statistics table 400 illustrated in
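For illustration, a boot pattern statistics table along the lines of table 400 might be represented as sketched below; the class and field names are hypothetical and the disclosure does not prescribe this particular layout.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class BootStatEntry:
    """One observed I/O during the boot period (illustrative field names)."""
    block: int        # starting block of the I/O on the storage volume
    length: int       # number of blocks transferred
    is_write: bool    # True for a write I/O, False for a read I/O
    timestamp: float  # time at which the I/O was observed


@dataclass
class BootPatternStats:
    """Per-client boot pattern statistics gathered by the storage processor."""
    entries: List[BootStatEntry] = field(default_factory=list)

    def record(self, block: int, length: int, is_write: bool, timestamp: float) -> None:
        self.entries.append(BootStatEntry(block, length, is_write, timestamp))
```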
From operation 302, the routine 300 proceeds to operation 304, where the storage processor module 110 analyzes the boot statistics collected in the boot pattern statistics table 400 to determine the portions or “blocks” of data that were read, but not written to, by the client computer 112 during the boot period. According to one embodiment, only client-specific boot data which is read and not written during the boot period is copied to the consolidated boot volume 210. To determine the blocks of data that were subject to pure read I/O operations during the boot period, the storage processor module 110 may utilize two data structures or bitmaps, a write bitmap 502 and a pure read bitmap 504, as shown in
The bitmaps 502, 504 are initialized such that no bits are set, as shown in
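One possible derivation of the two bitmaps from the collected statistics is sketched below; the two-pass approach and the helper name are assumptions, but the rule follows the text: a bit is set in the pure read bitmap only for blocks that were read and never written during the boot period.

```python
def build_bitmaps(entries, volume_blocks):
    """Derive the write bitmap and the pure read bitmap from boot statistics.

    `entries` is an iterable of (block, length, is_write) tuples describing the
    I/Os observed during the boot period; `volume_blocks` is the number of
    blocks in the storage volume. One byte per block is used here for clarity
    rather than a packed bit array.
    """
    write_bitmap = bytearray(volume_blocks)
    pure_read_bitmap = bytearray(volume_blocks)

    # Pass 1: mark every block touched by a write I/O during the boot period.
    for block, length, is_write in entries:
        if is_write:
            for blk in range(block, block + length):
                write_bitmap[blk] = 1

    # Pass 2: mark blocks that were read and never written (pure reads).
    for block, length, is_write in entries:
        if not is_write:
            for blk in range(block, block + length):
                if not write_bitmap[blk]:
                    pure_read_bitmap[blk] = 1

    return write_bitmap, pure_read_bitmap
```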
Next, the routine 300 proceeds from operation 304 to operation 306, where the storage processor module 110 iterates over the pure read bitmap 504 to create an array listing each individual block of data read by the client computer during the boot period. This array may be stored in a data structure such as the storage volume array 600 shown in
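Continuing the sketch, the storage volume array might be built by computing a signature over the data of each pure-read block and sorting the resulting entries by signature. A CRC is used here because the text below mentions CRCs as one example of a signature; the `read_block` callback is an assumption.

```python
import zlib


def build_volume_array(read_block, block_numbers):
    """Build a list of (signature, block_number) entries sorted by signature.

    `read_block(n)` is a hypothetical callback returning the bytes of block n;
    `block_numbers` lists the pure-read blocks identified by the bitmaps.
    """
    array = []
    for n in block_numbers:
        data = read_block(n)                  # bytes of the block on the volume
        signature = zlib.crc32(data)          # CRC serves as the block signature
        array.append((signature, n))
    array.sort()                              # sorted in signature order
    return array
```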
From operation 306, the routine 300 proceeds to operation 308, where the storage processor module 110 identifies those blocks in the storage volume array 600 containing client-specific boot data 206. This is done by comparing each entry in the storage volume array 600 against a similarly structured reference volume array 606. The reference volume array 606 may contain entries representing blocks of a typical storage volume 202 containing common boot data 204. According to embodiments, these blocks of data may be available in the boot cache 208 during the boot of the OS. As in the case of the storage volume array 600, the entries of the reference volume array 606 may be sorted in signature field 604 order.
The storage processor module 110 processes each entry in the storage volume array 600 to determine if there exists an entry in the reference volume array 606 with a matching signature value 604. Because the storage volume array 600 and reference volume array are sorted in signature field order, this process is very efficient, allowing the entries in each array to be iterated in one direction. For each entry in the storage volume array 600, if no entry exists in the reference volume array having a matching signature value 604, then the block of the storage volume 202 identified in the block number field 602 of the entry is designated to contain client-specific boot data 206. The block number containing the client-specific boot data 206 may then be inserted into a table of unique boot blocks 700, as shown in
If an entry in the reference volume array 606 is located having a matching signature value 604, then the data in the block identified by the block number field 602 of the storage volume array 600 and the data in the block identified by the block number field 602 of the reference volume array 606 are compared. Although signatures, such as CRCs, may match for a pair of blocks, there remains a possibility that the data might not entirely match. Thus, the actual data is compared to determine if the block on the storage volume 202 contains common boot data 204 or client-specific boot data 206. If the data in the block of the storage volume 202 does not match the data in the corresponding block on the reference volume, then the block number containing the client-specific boot data 206 is inserted into the table of unique boot blocks 700. This process continues until the storage processor module 110 has processed each entry in the storage volume array 600. Upon completion of the process, the table of unique boot blocks 700 contains a list of the blocks on the target storage volume 202 containing client-specific boot data 206.
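The comparison described in the preceding paragraphs can be rendered as a single forward pass over both signature-sorted arrays, falling back to a byte-for-byte comparison whenever signatures collide. This is an illustrative sketch rather than the claimed implementation, and the block-reading callbacks are hypothetical.

```python
def find_unique_boot_blocks(volume_array, reference_array,
                            read_volume_block, read_reference_block):
    """Return the storage volume block numbers that hold client-specific boot data.

    Both arrays are lists of (signature, block_number) pairs sorted by
    signature, so they can be walked in one direction, much like the merge
    step of a merge sort.
    """
    unique_blocks = []
    j = 0
    for signature, block in volume_array:
        # Advance the reference cursor past smaller signatures.
        while j < len(reference_array) and reference_array[j][0] < signature:
            j += 1

        # Compare actual data against every reference block with the same
        # signature, since a matching CRC does not guarantee matching data.
        k = j
        matched = False
        while k < len(reference_array) and reference_array[k][0] == signature:
            if read_volume_block(block) == read_reference_block(reference_array[k][1]):
                matched = True       # identical data: common boot data
                break
            k += 1

        if not matched:
            unique_blocks.append(block)   # client-specific boot data
    return unique_blocks
```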
Next, the routine 300 proceeds from operation 308 to operation 310, where the storage processor module 110 copies the blocks of data containing client-specific boot data 206 from the storage volume 202 to the allocated region 212 of the consolidated boot volume 210. The storage processor module 110 copies the blocks identified in the table of unique boot blocks 700. The blocks are copied in a contiguous fashion into the allocated region 212 of the consolidated boot volume 210. The blocks may be ordered in the allocated region according to access patterns determined from the boot statistics collected in the boot pattern statistics table 400 in order to optimize data access during the boot process. For example, a sequence of blocks read in a single I/O operation may be copied sequentially into a contiguous region of the consolidated boot volume 210.
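One possible way to order the unique blocks by the observed access pattern, as suggested above, is to sort them by the position of their first read in the collected statistics; this particular policy is an assumption, since the text only requires that the layout on the consolidated boot volume follow the boot access pattern.

```python
def order_blocks_by_access(unique_blocks, entries):
    """Order client-specific blocks by the position of their first boot-time read.

    `entries` is the list of (block, length, is_write) statistics in the order
    they were observed. Blocks that never appear in the statistics are placed
    at the end, in their original order.
    """
    first_seen = {}
    for position, (block, length, is_write) in enumerate(entries):
        if is_write:
            continue
        for blk in range(block, block + length):
            first_seen.setdefault(blk, position)

    return sorted(unique_blocks, key=lambda b: first_seen.get(b, len(entries)))
```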
From operation 310, the routine 300 proceeds to operation 312, where the storage processor module 110 creates a start length table (“SLT”) for the target storage volume 202 such as the storage volume unique data SLT 800 illustrated in
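A sketch of how the contiguous copy of operation 310 and the SLT of operation 312 might fit together is shown below: runs of consecutive source blocks are collapsed into entries holding a start block, a length, and the index of the copied data within the allocated region 212 (mirroring the index field 808 referenced later). The run-collapsing detail and the callback names are assumptions.

```python
def consolidate_blocks(unique_blocks, read_volume_block, write_consolidated_block,
                       region_start_index=0):
    """Copy client-specific blocks contiguously and build a start length table.

    `unique_blocks` is assumed to already be ordered by the observed boot
    access pattern. Each SLT entry records the start and length of a run of
    blocks on the storage volume and the index of the first copied block
    within the allocated region of the consolidated boot volume.
    """
    slt = []
    index = region_start_index
    run_start, run_len, run_index = None, 0, None

    for block in unique_blocks:
        write_consolidated_block(index, read_volume_block(block))  # contiguous copy
        if run_start is not None and block == run_start + run_len:
            run_len += 1                                            # extend current run
        else:
            if run_start is not None:
                slt.append({"start": run_start, "length": run_len, "index": run_index})
            run_start, run_len, run_index = block, 1, index
        index += 1

    if run_start is not None:
        slt.append({"start": run_start, "length": run_len, "index": run_index})
    return slt
```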
Next, the routine 300 proceeds from operation 312 to operation 314, where the storage processor module 110 initializes a dirty read bitmap 900 for the allocated region 212 of the consolidated boot volume 210 corresponding to the target storage volume 202. As shown in
It will be appreciated that operations 302 through 308 of the routine 300 may be performed simultaneously with a procedure to identify and cache common boot data 204 to a boot cache 208, as described above in regard to
However, if the storage processor module 110 does find the requested block(s) in the SLT 800, then the routine 1000 proceeds from operation 1006 to operation 1010, where the storage processor module 110 determines whether the I/O operation is a write or a read operation. If the I/O operation is a write operation, then the routine 1000 proceeds to operation 1012, where the write I/O operation is executed against the storage volume 202. The storage processor module 110 then invalidates the corresponding blocks of data in the consolidated boot volume 210 by setting the bits in the dirty read bitmap 900 that correspond to the index field 808 from the entry or entries in the SLT 800 for the target blocks of data. It will be appreciated that other methods may be utilized by the storage processor module 110 for handling writes to data blocks copied to the consolidated boot volume 210, including writing the data blocks to the corresponding locations in the consolidated boot volume 210 and then copying the modified blocks from the consolidated boot volume 210, according to the dirty read bitmap 900, back to the storage volume 202 at a later time.
If, at operation 1010, the I/O operation is not a write operation, then the routine 1000 proceeds to operation 1014, where the storage processor module 110 checks the bit in the dirty read bitmap 900 corresponding to each block of data to be retrieved from the consolidated boot volume 210, based on the index field 808 from the entry or entries in the SLT 800 for the target blocks of data. If the bit corresponding to the block of data targeted by the read I/O operation is set in the dirty read bitmap 900, then the corresponding block of data on the consolidated boot volume 210 has been invalidated, by a write operation, for example, and the block of data will be read directly from the storage volume 202 at operation 1016. If, however, the bit for the block of data is not set in the dirty read bitmap 900, then the routine 1000 proceeds from operation 1014 to operation 1018, where the storage processor module 110 retrieves the block of data from the consolidated boot volume 210 at a location indicated by the index field 808 in the corresponding entry of the SLT 800. From operation 1018, the routine 1000 ends.
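Putting the pieces together, the redirection logic of routine 1000 might look like the simplified sketch below, which assumes the SLT entries and dirty read bitmap from the earlier sketches. The `storage_volume` and `consolidated` objects and their `read_block`/`write_block` methods are hypothetical, and the boot cache path and error handling are omitted.

```python
def lookup_slt(slt, block):
    """Return the SLT entry covering `block`, or None if it was not consolidated."""
    for entry in slt:
        if entry["start"] <= block < entry["start"] + entry["length"]:
            return entry
    return None


def handle_boot_write(block, data, slt, dirty, storage_volume):
    """Writes go to the storage volume; any consolidated copy is invalidated."""
    storage_volume.write_block(block, data)
    entry = lookup_slt(slt, block)
    if entry is not None:
        dirty[entry["index"] + (block - entry["start"])] = 1


def handle_boot_read(block, slt, dirty, storage_volume, consolidated):
    """Reads are redirected to the consolidated boot volume when a valid copy exists."""
    entry = lookup_slt(slt, block)
    if entry is None:
        return storage_volume.read_block(block)      # block was never consolidated
    index = entry["index"] + (block - entry["start"])
    if dirty[index]:
        return storage_volume.read_block(block)      # copy invalidated by a write
    return consolidated.read_block(index)            # serve from the consolidated volume
```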
In particular,
The CPU 1122 performs the necessary operations by transitioning from one discrete, physical state to the next through the manipulation of switching elements that differentiate between and change these states. Switching elements may generally include electronic circuits that maintain one of two binary states, such as flip-flops, and electronic circuits that provide an output state based on the logical combination of the states of one or more other switching elements, such as logic gates. These basic switching elements may be combined to create more complex logic circuits, including registers, adders-subtractors, arithmetic logic units, floating-point units, and other logic elements.
The chipset 1152 includes a north bridge 1124 and a south bridge 1126. The north bridge 1124 provides an interface between the CPU 1122 and the remainder of the computer system 1100. The north bridge 1124 also provides an interface to a random access memory (RAM) used as the main memory 1154 in the computer system 1100 and, possibly, to an on-board graphics adapter 1130. The north bridge 1124 may also provide networking functionality through a gigabit Ethernet adapter 1128. The gigabit Ethernet adapter 1128 is capable of connecting the computer system 1100 to another computer via a network. Connections which may be made by the network adapter 1128 may include LAN or WAN connections. LAN and WAN networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet. The north bridge 1124 is connected to the south bridge 1126.
The south bridge 1126 is responsible for controlling many of the input/output functions of the computer system 1100. In particular, the south bridge 1126 may provide one or more universal serial bus (USB) ports 1132, a sound adapter 1146, an Ethernet controller 1160, and one or more general purpose input/output (GPIO) pins. The south bridge 1126 may also provide a bus for interfacing peripheral card devices such as a graphics adapter 1162. In one embodiment, the bus comprises a peripheral component interconnect (PCI) bus. The south bridge 1126 may also provide a system management bus for use in managing the various components of the computer system 1100.
The south bridge 1126 is also operative to provide one or more interfaces for connecting mass storage devices to the computer system 1100. For instance, according to an embodiment, the south bridge 1126 includes a serial advanced technology attachment (SATA) adapter 1136 for connecting one or more SATA disk drives 1138. The mass storage devices connected to the interfaces of the south bridge may provide non-volatile storage for the computer system 1100.
The computer system 1100 may store information in the mass storage devices by transforming the physical state of the device to reflect the information being stored. The specific transformation of physical state may depend on various factors, in different implementations of this description. Examples of such factors may include, but are not limited to, the technology used to implement the mass storage devices, whether the mass storage devices are characterized as primary or secondary storage, and the like. For example, the computer system 1100 may store information to the SATA disk drive 1138 by issuing instructions to the SATA adapter 1136 to alter the magnetic characteristics of a particular location within the SATA disk drive. These transformations may also include altering the physical features or characteristics of other types of media, including altering the reflective or refractive characteristics of a particular location in an optical storage device, or modifying the electrical characteristics of a particular capacitor, transistor, or other discrete component in a solid-state storage device. Other transformations of physical media are possible without departing from the scope and spirit of the present description, with the foregoing examples provided only to facilitate this discussion. The computer system 1100 may further read information from the mass storage device by detecting the physical states or characteristics of one or more particular locations within the mass storage device.
The SATA disk drive 1138 may store an operating system 1140 utilized to control the operation of the computer system 1100. According to one embodiment, the operating system 1140 comprises the LINUX operating system. According to another embodiment, the operating system 1140 comprises the WINDOWS® SERVER operating system from MICROSOFT CORPORATION. According to further embodiments, the operating system 1140 may comprise the UNIX or SOLARIS operating systems. It should be appreciated that other operating systems may also be utilized. The SATA disk drive 1138 may store other system or application programs and data utilized by the computer system 1100. In one embodiment, the SATA disk drive 1138 may store the storage processor module 110 described above in regard to
In addition to the mass storage devices described above, the computer system 1100 may have access to other computer-readable storage media to store and retrieve information, such as program modules, data structures, or other data. It should be appreciated by those skilled in the art that computer-readable storage media can be any available media that can be accessed by the computer system 1100. By way of example, and not limitation, computer-readable storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information, such as program modules, data structures, or other data. Computer-readable storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid-state memory technology, CD-ROM, DVD, HD-DVD, BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer system 1100.
The computer-readable storage medium may be encoded with computer-executable instructions that, when loaded into the computer system 1100, may transform the computer system from a general-purpose computing system into a storage system computer capable of implementing the embodiments described herein. The computer-executable instructions may be encoded on the computer-readable storage medium by altering the electrical, optical, magnetic, or other physical characteristics of particular locations within the media. These computer-executable instructions transform the computer system 1100 by specifying how the CPU 1122 transitions between states, as described above. According to one embodiment, the computer system 1100 may access computer-readable storage media storing computer-executable instructions that, when executed by the computer system, perform the routine 300 or the routine 1000, described above in regard to
A low pin count (LPC) interface may also be provided by the south bridge 1126 for connecting a "Super I/O" device 1170. The Super I/O device 1170 is responsible for providing a number of input/output ports, including a keyboard port, a mouse port, a serial interface 1172, a parallel port, and other types of input/output ports. The LPC interface may also connect a computer storage medium, such as a ROM or a flash memory such as an NVRAM 1148, for storing the firmware 1150 that includes program code containing the basic routines that help to start up the computer system 1100 and to transfer information between elements within the computer system 1100. The NVRAM may also store portions of, or the entirety of, the storage processor module 110, described above in regard to
Based on the foregoing, it should be appreciated that technologies for accelerating the boot process of client computers by consolidating client-specific boot data in a data storage system are presented herein. Although the subject matter presented herein has been described in language specific to computer structural features, methodological acts, and computer-readable media, it is to be understood that the invention defined in the appended claims is not necessarily limited to the specific features, acts, or media described herein. Rather, the specific features, acts, and media are disclosed as example forms of implementing the claims.
The subject matter described above is provided by way of illustration only and should not be construed as limiting. Various modifications and changes may be made to the subject matter described herein without following the example embodiments and applications illustrated and described, and without departing from the true spirit and scope of the present invention, which is set forth in the following claims.
This application is a continuation application of U.S. patent application Ser. No. 12/435,602, entitled “Boot Acceleration By Consolidating Client-Specific Boot Data In A Data Storage System,” filed May 5, 2009, which will issue as U.S. Pat. No. 8,799,429 on Aug. 5, 2014, which claims the benefit of U.S. provisional patent application No. 61/050,879, filed on May 6, 2008, entitled “Boot Acceleration by Defragmenting Client Specific Boot Data in a Data Storage System,” both of which are expressly incorporated herein by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
4814981 | Rubinfeld | Mar 1989 | A |
4942579 | Goodlander et al. | Jul 1990 | A |
4972316 | Dixon et al. | Nov 1990 | A |
5060144 | Sipple et al. | Oct 1991 | A |
5257367 | Goodlander et al. | Oct 1993 | A |
5353430 | Lautzenheiser | Oct 1994 | A |
5530850 | Ford et al. | Jun 1996 | A |
5579516 | Van Maren et al. | Nov 1996 | A |
5619690 | Matsumani et al. | Apr 1997 | A |
5720027 | Sarkozy et al. | Feb 1998 | A |
5732238 | Sarkozy | Mar 1998 | A |
5732265 | Dewitt et al. | Mar 1998 | A |
5761709 | Kranich | Jun 1998 | A |
5771354 | Crawford | Jun 1998 | A |
5778430 | Ish et al. | Jul 1998 | A |
5790774 | Sarkozy | Aug 1998 | A |
5809560 | Schneider | Sep 1998 | A |
5819292 | Hitz et al. | Oct 1998 | A |
5822773 | Pritchard et al. | Oct 1998 | A |
5884093 | Berenguel et al. | Mar 1999 | A |
5893919 | Sarkozy et al. | Apr 1999 | A |
5974426 | Lee et al. | Oct 1999 | A |
5990810 | Williams | Nov 1999 | A |
6038570 | Hitz et al. | Mar 2000 | A |
6098128 | Velez-McCaskey et al. | Aug 2000 | A |
6202121 | Walsh et al. | Mar 2001 | B1 |
6205450 | Kanome | Mar 2001 | B1 |
6216207 | Miller et al. | Apr 2001 | B1 |
6298425 | Whitaker et al. | Oct 2001 | B1 |
6324546 | Ka et al. | Nov 2001 | B1 |
6366988 | Skiba et al. | Apr 2002 | B1 |
6389513 | Closson | May 2002 | B1 |
6434681 | Armangau | Aug 2002 | B1 |
6460054 | Grummon | Oct 2002 | B1 |
6591334 | Shyam et al. | Jul 2003 | B1 |
6591347 | Tischler et al. | Jul 2003 | B2 |
6711624 | Narurkar et al. | Mar 2004 | B1 |
6892211 | Hitz et al. | May 2005 | B2 |
7043637 | Bolosky et al. | May 2006 | B2 |
7051165 | Kimura et al. | May 2006 | B2 |
7072916 | Lewis et al. | Jul 2006 | B1 |
7080104 | Ring et al. | Jul 2006 | B2 |
7111026 | Sato | Sep 2006 | B2 |
7197606 | Kobayashi et al. | Mar 2007 | B2 |
7246208 | Hoshino et al. | Jul 2007 | B2 |
7249218 | Gibble et al. | Jul 2007 | B2 |
7308536 | Arimilli et al. | Dec 2007 | B2 |
7359937 | Douceur et al. | Apr 2008 | B2 |
7373366 | Chatterjee et al. | May 2008 | B1 |
7398382 | Rothman et al. | Jul 2008 | B2 |
7424514 | Noble et al. | Sep 2008 | B2 |
7454571 | Sucharitakul | Nov 2008 | B1 |
7457934 | Yagawa | Nov 2008 | B2 |
7487342 | Cronk et al. | Feb 2009 | B2 |
7536529 | Chatterjee et al. | May 2009 | B1 |
7607000 | Smith et al. | Oct 2009 | B1 |
7689766 | Chatterjee et al. | Mar 2010 | B1 |
7737673 | Xi et al. | Jun 2010 | B2 |
7747584 | Jernigan, IV | Jun 2010 | B1 |
7840537 | Gokhale et al. | Nov 2010 | B2 |
7899789 | Schwaab et al. | Mar 2011 | B2 |
7930312 | Hild et al. | Apr 2011 | B2 |
7987156 | Chatterjee et al. | Jul 2011 | B1 |
8001323 | Honma | Aug 2011 | B2 |
8024542 | Chatterjee et al. | Sep 2011 | B1 |
8082407 | Chatterjee et al. | Dec 2011 | B1 |
8117158 | Chatterjee et al. | Feb 2012 | B1 |
8140775 | Chatterjee et al. | Mar 2012 | B1 |
8260744 | Chatterjee et al. | Sep 2012 | B1 |
8332844 | Kulkarni et al. | Dec 2012 | B1 |
8352716 | Chatterjee et al. | Jan 2013 | B1 |
8370597 | Chatterjee et al. | Feb 2013 | B1 |
8402209 | Chatterjee et al. | Mar 2013 | B1 |
8549230 | Chatterjee et al. | Oct 2013 | B1 |
8732411 | Chatterjee et al. | May 2014 | B1 |
8775786 | Chatterjee et al. | Jul 2014 | B1 |
8799429 | Chatterjee et al. | Aug 2014 | B1 |
8799595 | Chatterjee et al. | Aug 2014 | B1 |
20010049771 | Tischler et al. | Dec 2001 | A1 |
20020161983 | Milos et al. | Oct 2002 | A1 |
20030115301 | Koskimies | Jun 2003 | A1 |
20030126242 | Chang | Jul 2003 | A1 |
20030142561 | Mason et al. | Jul 2003 | A1 |
20030163630 | Aasheim et al. | Aug 2003 | A1 |
20040030727 | Armangau et al. | Feb 2004 | A1 |
20040128470 | Hetzler et al. | Jul 2004 | A1 |
20040153383 | K et al. | Aug 2004 | A1 |
20040186898 | Kimura et al. | Sep 2004 | A1 |
20050044346 | Cronk et al. | Feb 2005 | A1 |
20050177684 | Hoshino et al. | Aug 2005 | A1 |
20050216538 | Douceur et al. | Sep 2005 | A1 |
20050283575 | Kobayashi et al. | Dec 2005 | A1 |
20060143432 | Rothman et al. | Jun 2006 | A1 |
20060218364 | Kitamura | Sep 2006 | A1 |
20060288202 | Doran et al. | Dec 2006 | A1 |
20070075694 | Xi et al. | Apr 2007 | A1 |
20070192763 | Helvick | Aug 2007 | A1 |
20070255758 | Zheng et al. | Nov 2007 | A1 |
20080005141 | Zheng et al. | Jan 2008 | A1 |
20080082812 | Kirshenbaum et al. | Apr 2008 | A1 |
20080104107 | Schwaab et al. | May 2008 | A1 |
20080155243 | Diep et al. | Jun 2008 | A1 |
20080229040 | Honma | Sep 2008 | A1 |
20080243879 | Gokhale et al. | Oct 2008 | A1 |
20090007261 | Smith | Jan 2009 | A1 |
20100017591 | Smith et al. | Jan 2010 | A1 |
Entry |
---|
US 6,988,220, 1/2006, Eng et al. (withdrawn). |
Douglis, F., et al., “Log-Structured File Systems,” IEEE, 1989, pp. 124-129. |
“Elementary Data Structures,” http://www2.toki.or.id/book/AlgDesignManual/LEC/LECTUR17/NODE7.HTM, Jun. 2, 1997, accessed Feb. 29, 2008, 10 pages. |
Green, R.J., et al., “Designing a Fast, On-line Backup System for a Log-Structured File System,” Digital Technical Journal, vol. 8, No. 2, 1996, pp. 32-45. |
Peterson, Z., et al., “Ext3cow: A Time-Shifting File System for Regulatory Compliance,” ACM Transactions on Storage, vol. 1, No. 2, May 2005, pp. 190-212. |
Rosenblum, M., et al., “The Design and Implementation of a Log-Structured File System,” ACM Transactions on Computer Systems, vol. 10, No. 1, Feb. 1992, pp. 26-52. |
U.S. Appl. No. 12/200,279, filed Aug. 28, 2008 entitled “Eliminating Duplicate Data in Storage Systems With Boot Consolidation,” Inventors: Chatterjee et al. |
U.S. Appl. No. 12/435,602, filed May 5, 2009, entitled "Boot Acceleration by Consolidating Client Specific Boot Data in a Data Storage System," Inventors: Chatterjee et al. |
U.S. Appl. No. 12/355,439, filed Jan. 16, 2009 entitled “Boot Caching for Boot Acceleration within Data Storage Systems,” Inventors: Chatterjee et al. |
U.S. Official Action, dated Jan. 5, 2012, received in connection with related U.S. Appl. No. 12/355,439. |
U.S. Official Action, dated Nov. 10, 2011, received in connection with related U.S. Appl. No. 12/200,279. |
U.S. Appl. No. 12/104,116, filed Apr. 16, 2008 entitled “Writable Snapshots for Boot Consolidation,” Inventors: Chatterjee et al. |
Number | Date | Country | |
---|---|---|---|
20150012628 A1 | Jan 2015 | US |
Number | Date | Country | |
---|---|---|---|
61050879 | May 2008 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 12435602 | May 2009 | US |
Child | 14450855 | US |