Method for file based shingled data storage utilizing multiple media types

Information

  • Patent Grant
  • Patent Number
    8,756,382
  • Date Filed
    Thursday, June 30, 2011
  • Date Issued
    Tuesday, June 17, 2014
Abstract
The present invention relates to methods and systems for efficiently accessing data stored on a data storage device. The data storage device may comprise various types of media, such as shingled media and non-shingled media, alone or in combination. The data storage device may employ a logical block address space for specifying location of blocks of data stored on the data storage device. In addition, pre-determined sequential ranges of logical block addresses are grouped together and may be referenced collectively. In some embodiments, each type of media may be partitioned into sections for containing different sizes of collections. Each collection of logical block addresses may be allocated to an arbitrary logical slot. Each logical slot may then be linked to a physical slot on the data storage device.
Description
BACKGROUND

Typically, files are stored on a data storage system, such as a hard disk, solid-state drive, or hybrid drive. The files are written via file system data structures in logical blocks that are understood by the operating system of a host. The logical blocks are then mapped to a physical location on the data storage device. Conventionally, the logical blocks are a relatively small size, such as 4 KB. Accordingly, a file is broken down into a plurality of blocks when stored.


The operating system will organize files on the disk, tracking, for example, file names and the sequence of physical locations on the disk that hold each file. One problem with breaking down a file from a host into small blocks is fragmentation. Fragmentation is a condition in which a requested file has been broken up into blocks and these blocks are scattered around at different locations on a disk. Over time, fragmentation of files will increase on a disk as files are updated, added, and deleted. Eventually, without correction, fragmentation can significantly degrade the performance of the disk because each file has been broken up into numerous pieces.


When a file is broken up, the disk requires a plurality of input/output (I/O) disk operations in order to retrieve and assemble the data for a requested file. When it takes more than one disk operation to obtain the data for a fragmented file, this is known as a split transfer or split I/O. For every split transfer, the overhead of an additional disk I/O transfer is added. The more I/O requests, the longer user applications running on a host device must wait for I/O to be processed.


Unfortunately, fragmentation of a drive is not a simple problem. Files for different media types will have different sizes and different access characteristics by the application using that file. Accordingly, it would be desirable to provide methods and systems for storing and accessing files stored on a data storage device more efficiently.





BRIEF DESCRIPTION OF THE DRAWINGS

Systems and methods which embody the various features of the invention will now be described with reference to the following drawings, in which:



FIGS. 1 and 1A are block diagrams illustrating a data storage device according to one embodiment.



FIG. 2 illustrates an exemplary file structure for a shingled file according to one embodiment.



FIG. 3 illustrates an exemplary translation of logical slots to physical locations on the data storage device.



FIG. 4 illustrates an exemplary translation table according to one embodiment.



FIG. 5 illustrates an exemplary translation table entry of the translation table shown in FIG. 4.



FIG. 6 illustrates an exemplary garbage collection process according to one embodiment.



FIG. 7 illustrates an exemplary defect mapping according to one embodiment.



FIG. 8 illustrates an exemplary file migration according to one embodiment.



FIG. 9 illustrates an exemplary recovery from an event according to one embodiment.





DETAILED DESCRIPTION

The present invention is directed to systems and methods for writing host data to “files” on the media, rather than in blocks. A data storage file is a sequential range of logical block address (LBA) space; such ranges are referred to as slots in the present disclosure. In other words, a file or slot is pinned to a contiguous LBA range. The “files” on the media are not required to be contiguous or physically “pinned”. For shingled media, the files are shingled and therefore can be written at the end of a shingle or separated by guard bands.


A data storage device of the present invention may contain any combination of shingled and non-shingled media. Therefore, files on the media may be either shingled or non-shingled. Since the files contain LBAs that are sequential, this design can take advantage of speculative reading for performance.


Files may contain a pre-designated range of LBAs where the LBAs are contiguous and sequential within the file. For example, a file may contain 4 MB of 512-byte or 4 KB LBAs. For maximum format efficiency in shingled media, files in a shingled area are allocated at the end of the shingle. For non-shingled media, the files can be allocated at any block address within the non-shingled area. The files can be allocated dynamically.
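The slot arithmetic implied by this layout can be sketched as follows. This is our own illustration, not the patent's: the names and the assumed constants (4 KB logical blocks, a fixed 4 MB file size) are illustrative.

```python
# Sketch: locating the pre-designated slot that covers an LBA.
# Assumptions (illustrative): 4 KB logical blocks, fixed 4 MB files/slots.
BLOCK_SIZE = 4096
FILE_SIZE = 4 * 1024 * 1024
BLOCKS_PER_FILE = FILE_SIZE // BLOCK_SIZE  # 1024 blocks per file

def slot_for_lba(lba: int) -> int:
    """Return the logical slot whose pre-designated LBA range covers lba."""
    return lba // BLOCKS_PER_FILE

def lba_range_of_slot(slot: int) -> range:
    """The contiguous, sequential LBA range pinned to a logical slot."""
    start = slot * BLOCKS_PER_FILE
    return range(start, start + BLOCKS_PER_FILE)
```

Because each slot covers a fixed, contiguous LBA range, locating the slot for an LBA is pure arithmetic and never requires a search.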


Each media type of the data storage device may be split into sections or zones. Each section or zone may have different file sizes based on a number of conditions or requirements. Multiple shingle zones may be based on different file sizes (1 MB, 2 MB, 4 MB, etc.). In one embodiment, the data storage device utilizes a translation table and garbage collection process that is aware of these media zones for different files.


Certain embodiments of the inventions will now be described. These embodiments are presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms. Furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the inventions. To illustrate some of the embodiments, reference will now be made to the figures.



FIG. 1 shows a data storage device 50 according to one embodiment. The data storage device 50 includes a head 2₁ actuated radially over a disk surface 4₁ by an actuator arm 6₁, and a voice coil motor (VCM) 8 operable to rotate the actuator arm 6₁ about a pivot. Each disk surface 4, such as the disk surface 4₁, comprises a host addressable area 10 with a plurality of data tracks 18, wherein each data track 18 comprises a plurality of data blocks 20.


In the embodiment in FIG. 1, the disk surface 4₁ further comprises a plurality of embedded servo sectors 30₁-30N that define the data tracks 18 in the host addressable area 10. The data storage device 50 further comprises control circuitry 32, which is operable to process a read signal 34 emanating from the head 2₁ to demodulate the embedded servo sectors 30₁-30N and generate a position error signal (PES). The PES represents a radial offset of the head 2₁ from a target data track 18 in the host addressable area 10. The control circuitry 32 is further operable to process the PES with a suitable servo compensator to generate a VCM control signal 36 applied to the VCM 8. The VCM 8 rotates the actuator arm 6₁ about a pivot in order to actuate the head 2₁ radially over the disk surface 4₁ in a direction that decreases the PES. The control circuitry 32 is also configured to receive commands from a driver 58 in the host system 56. Of note, the embodiments of the present invention may operate in the form of interface communications between the host 56 and the data storage device 50. For example, the embodiments may be applied to SCSI, SATA, SAS, etc., communications.


For shingled files, a write to the same LBA(s) may require a read of the file containing the LBA(s), a merge of the file read data and host data, and a write to a new file at the end of the shingle. Read-modify-writes (RMWs) of shingled files may not require a read first if the entire file is contained in cache and/or transferred from the host 56.


For non-shingled files, a write to the same LBA(s) may occur in place, requiring no further action or file allocation. The purpose of a non-shingled area is to provide a set of files that can be randomly written quickly without the need for a read, modify, and write, resulting in a fast write area. This area will be used for random host writes as well. This area can be large enough to hold, for example, an operating system plus the most frequently used user data. Less frequently used data can be migrated to the shingled files area. This allows for standard disk drive performance during random writes and mitigates the random write penalty by allowing data to be re-written in place. Non-shingled files do not require adjacent files or ranges of files to be located in the non-shingled file area.
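The two write paths described above can be modeled with a minimal in-memory sketch. The class, method names, and the identity mapping used for non-shingled slots are our own simplifications, not the patent's.

```python
# Toy model (illustrative names): non-shingled slots are rewritten in
# place; shingled slots take a read-modify-write to a new physical slot
# appended at the end of the shingle, leaving the old slot as garbage.
class Device:
    def __init__(self, shingled_slots):
        self.shingled = set(shingled_slots)  # logical slots on shingled media
        self.table = {}                      # logical slot -> physical slot
        self.files = {}                      # physical slot -> {lba: data}
        self.end_of_shingle = 0              # next free shingled physical slot

    def write(self, slot, host_data):
        if slot not in self.shingled:
            # Non-shingled fast path: update in place, no new allocation.
            # (Identity logical->physical mapping is a toy simplification.)
            phys = self.table.setdefault(slot, slot)
            self.files.setdefault(phys, {}).update(host_data)
            return
        # Shingled path: read the old file (if any), merge with host data,
        # and write the merged file at the end of the shingle.
        old = self.files.get(self.table.get(slot), {})
        merged = {**old, **host_data}
        new_phys = self.end_of_shingle
        self.end_of_shingle += 1
        self.files[new_phys] = merged
        self.table[slot] = new_phys          # old physical slot becomes garbage
```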


In one embodiment, the data storage device 50 may be a hybrid drive, and thus, may further comprise a solid-state storage space, such as a semiconductor memory (SM) 38 communicatively coupled to the control circuitry 32. The SM 38 may serve as a cache for storing write data received from the host 56 via a write command and read data requested by the host 56 via a read command. The SM 38 can be implemented with one or more memories, for example, using dynamic random access memory (DRAM), flash memory, or static random access memory (SRAM).


In addition, the SM 38 may provide storage for a translation table used by the control circuitry 32. The translation table provides a data structure for mapping the logical block addresses (LBAs) requested by the host 56 into physical locations on the data storage device 50, such as on the disk surfaces 4. On data storage devices that implement shingled magnetic recording, such as those shown in FIG. 2, commands from the host 56 are recorded sequentially, resulting in LBA indirection. Since there is no fixed location on the data storage device 50 for a given LBA, the control circuitry 32 maintains the translation table to keep track of the locations of the LBAs. For example, when an LBA is rewritten, a newer copy of that LBA will be written in a new location on the disk surfaces 4, and the control circuitry 32 then updates the translation table to reflect the latest physical location of that LBA.


A translation table, however, may consume significant memory resources, for example, of the SM 38, especially for larger capacity data storage devices. In one embodiment, to minimize the space used by the translation table, it may be made run-length encoded.


Accordingly, the embodiments of the present invention provide a way to optimize the size of the translation table and read performance. For example, in one embodiment, defragmentation is performed regularly to avoid excessive fragmentation of the drive. Defragmentation is a process in which fragmented LBAs are read and rewritten sequentially to restore a sequential physical layout. In addition, the translation table is updated and optimized to reduce its required size.



FIG. 1A is a block diagram illustrating a shingled portion of the disk storage device 50 having a plurality of zones according to one embodiment. In other words, one embodiment of the data storage device 50 may be a shingled-based disk drive in which the control circuitry 32 accesses at least part of the storage using log structure writes wherein a band of the data tracks are accessed as a circular buffer. In the shingled-based disk drive embodiment, data tracks are written in a shingled (overlapping) manner.


As shown, for example, the data storage device 50 is divided into multiple zones, including a zone 1 (108), a zone 2 (110), and a zone 3 (112). A plurality of heads 2₁-2₄ are actuated over respective disk surfaces 4₁-4₄ by a VCM 8 which rotates actuator arms 6₁-6₃ about a pivot. In one embodiment, each of the disk surfaces 4₁-4₄ comprises a host addressable area 10 comprising a plurality of data tracks 18. A zone may span multiple disk surfaces as shown. For example, zone 1 (108) may span a portion of the disk surfaces 4₁, 4₂, 4₃, and 4₄ as indicated by the bracket. Similarly, zone 2 (110) may span a portion of the disk surfaces 4₁, 4₂, 4₃, and 4₄ as indicated by the bracket.


As noted, each media type may be split into sections or zones. Each section or zone may have different file sizes based on a number of conditions or requirements. Multiple shingle zones may be based on different file sizes, such as 1 MB, 2 MB, 4 MB, and the like.



FIG. 2 illustrates an exemplary file structure for a shingled file according to one embodiment. As shown, shingled files may contain a pre-designated range of LBAs where the LBAs are contiguous and sequential within the file. For example, as shown, a file may contain 4 MB of LBAs. For maximum format efficiency in shingled media, files in a shingled area are allocated at the end of the shingle.


With this structure, translation of a file from logical to physical is merely an offset calculation. For example, a file containing LBAs n to n+100 can be located by adding the offset within the file to the physical block address representing LBA n. As noted, these physical and logical locations may be referred to as “slots” in the present disclosure. As will be described further below, the translation table may be configured based on the use of slots. For example, physical slots are contiguously allocated from the first addressable LBA. Files thus may reside in arbitrary logical slots with a 1:1 link to a physical slot.
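A hedged sketch of this offset calculation follows, again assuming 4 MB slots of 4 KB blocks; the constants and the shape of the table are illustrative assumptions, not the patent's definitions.

```python
# Sketch of the logical-to-physical offset translation described above.
# Assumptions (illustrative): 4 MB slots composed of 4 KB blocks, and a
# table mapping logical slot -> physical slot, with physical slots laid
# out contiguously from the first addressable block.
BLOCKS_PER_SLOT = (4 * 1024 * 1024) // 4096  # 1024 blocks per slot

def translate(lba: int, table) -> int:
    """Map an LBA to a physical block address via its logical slot."""
    logical_slot = lba // BLOCKS_PER_SLOT
    offset = lba % BLOCKS_PER_SLOT           # position within the file
    physical_slot = table[logical_slot]      # 1:1 logical-to-physical link
    return physical_slot * BLOCKS_PER_SLOT + offset
```

The whole translation is one division, one lookup, and one addition, which is why no extent list is needed.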


Shingled files, regardless of the LBA range within the file, are located at the end of the shingle in the next physical slot. Physical slots may be over-provisioned for garbage collection and runway.




In contrast, for non-shingled files, a write to the same LBA(s) may occur in place, requiring no further action or file allocation. Accordingly, in one embodiment, the data storage device 50 may employ a non-shingled area to store files that can be randomly written quickly without the need for a read, modify, and write, resulting in a fast write area. This area may be used for random writes by the host 56 as well.


In one embodiment, the area is configured to hold the operating system of the host 56 plus frequently used user data. Less frequently used data can be migrated to shingled files on the data storage device 50. One skilled in the art will recognize that this feature may allow for standard disk drive performance during random writes and mitigates the random write penalty by allowing data to be re-written in place.


The embodiments allow for any combination of shingled and non-shingled files. The use of both shingled and non-shingled files on a rotating media allows for different types of media access and performance, with the non-shingled zone being located in the fastest data rate area of the rotating media.


Any file can be written in any non-shingled physical slot. This allows for the flexibility of having any host write (random or sequential) data written directly, and allows most-recently-used and most-accessed algorithms to populate the physical slots. However, over time, the files in these non-shingled slots in the non-shingled zone can be migrated to other media types and zones, including shingled zones.


Alternatively, the data storage device 50 may use “pinned” LBA range(s). In other words, a slot is tied to a range of pinned LBAs for taking advantage of random write performance. This approach may be beneficial for certain types of files that only utilize a sub-capacity of the data storage device 50.



FIG. 3 illustrates an exemplary translation of logical slots to physical locations on the data storage device. FIG. 4 illustrates an exemplary translation table according to one embodiment.


As shown, the translation table can be sorted by a logical slot number. If a file is written to a logical slot, then the control circuitry 32 writes an entry that points to a physical slot on disk surface 4 of the data storage device 50. As can be seen, there is a 1:1 correlation between logical and physical slots. In one embodiment, unused physical slots are unassigned to logical slots.


With this feature of the embodiments, the translation table can be small enough to be direct-mapped, since it is essentially an array of entries directly indexed by logical address. That is, the entry for a logical slot points to a respective physical slot. The translation table thus does not have to describe extents, nor does it need to be run-length encoded. Of note, reverse lookup can be achieved by sorting the translation table according to physical slot number and indexing as such.
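As a rough illustration of a direct-mapped table and its reverse lookup, consider the following sketch; all names and the linear-scan reverse lookup are ours (the patent instead describes sorting by physical slot number for indexed reverse lookup).

```python
# Direct-mapped translation table (illustrative): an array indexed by
# logical slot number, each entry holding a physical slot, or None if the
# logical slot has no file and thus no physical slot assigned.
table = [None] * 8
table[0], table[1], table[4] = 2, 0, 1   # example assignments

def reverse_lookup(table, physical_slot):
    """Find which logical slot currently occupies a physical slot."""
    for logical, phys in enumerate(table):
        if phys == physical_slot:
            return logical
    return None                          # physical slot is free or garbage
```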



FIG. 5 illustrates an exemplary translation table entry of the translation table shown in FIG. 4. As shown, an entry of the translation table may comprise a physical slot identifier, a starting absolute block address (ABA), an ending absolute block address (ABA), a file size, various flags, a read counter, and a write counter. Of course, other meta-data may be included in the translation table entries to facilitate other processes, such as garbage collection, defragmentation, check pointing, recovery, etc. In one embodiment, the translation table entry is configured to be a relatively small size, such as approximately 8 bytes. This allows the translation table to consume a relatively small amount of memory in the SM 38. For example, for a 1 TB drive and assuming a file size of 4 MB, the embodiments are able to provide a translation table that is about 2 MB.
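The quoted table size can be checked, and an 8-byte entry sketched, as follows. The field widths here are our own assumptions: an entry with all of the fields shown in FIG. 5 would exceed 8 bytes unless some fields, such as the ABAs, are derivable from the slot number.

```python
import struct

# Back-of-the-envelope check of the table size quoted above: a 1 TB drive
# divided into 4 MB files needs 262,144 entries; at 8 bytes per entry the
# whole table is 2 MB.
entries = (1 * 1024**4) // (4 * 1024**2)  # 262,144 files on 1 TB
assert entries * 8 == 2 * 1024**2         # 2 MB table

# One hypothetical 8-byte entry: 4-byte physical slot, 1 byte of flags,
# 1-byte read counter, 1-byte write counter, 1 spare pad byte.
entry = struct.pack("<IBBBx", 1234, 0b0001, 17, 3)
assert len(entry) == 8
```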



FIG. 6 illustrates an exemplary garbage collection process according to one embodiment. In the embodiments, garbage collection may be performed on a “file” basis. As LBAs are re-written, the “file” undergoes a read modify write (RMW) to a new physical slot. The old physical slot thus becomes “garbage”. This “garbage” then presents a hole in the shingle, e.g., physical slots 2 and 3, as shown. In addition, the logical slots 1 and 2 have been moved and reassigned to physical slots 8 and 9, respectively. The translation table is updated to reflect this reassignment (not shown).


Next, garbage collection moves the stranded files caused by RMW to the end of the shingle. The translation table is also updated with the new physical slot (not shown). Thus, the garbage collection of the present embodiments may maintain a constant, contiguous shingle more efficiently. One advantage of this embodiment is that the notion of invalid data does not apply at the LBA level. Searching the translation table for valid LBAs is not required, as validity exists only at the file level. Accordingly, garbage collection is simplified to being a file read-write and a translation table update. Of note, since non-shingled files do not require an RMW at the end of a shingle, there will not be any garbage created in the non-shingled zone in one embodiment.
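A toy sketch of this file-level garbage collection step follows; the data structures and names are illustrative, not the patent's.

```python
# One garbage-collection step (illustrative): the oldest-written live file
# (lowest physical slot still referenced by the table) is read and
# rewritten at the end of the shingle, then its table entry is updated.
# No per-LBA validity scan is needed: a physical slot is either live
# (referenced by the table) or garbage (unreferenced), whole-file at a time.
def collect(table, files, end_of_shingle):
    """table: logical slot -> physical slot; files: physical slot -> data."""
    live = {phys: slot for slot, phys in table.items()}
    oldest_phys = min(live)               # oldest live file in the shingle
    slot = live[oldest_phys]
    data = files.pop(oldest_phys)         # read the stranded file
    files[end_of_shingle] = data          # rewrite it at the end of shingle
    table[slot] = end_of_shingle          # one table update completes GC
    return end_of_shingle + 1
```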


As RMWs are performed during normal operation, garbage collection will move the oldest written file. Thus, the embodiments may prevent any need to do periodic or urgent garbage collection. This provides predictable performance and overhead with a constant performance impact as at least one advantage of the present invention.


If requested or necessary, time-based or urgency-based garbage collection may be provided using arbitration. Arbitration may be employed to increase the garbage collection rate, and thus its performance impact, when garbage increases, and to increase steady-state performance otherwise.


Defragmentation of the data area for shingled files may be performed by sorting the files logically in the physical slots. Logical sorting can resemble the logical layout of conventional disk drives. Logically sorted, the data storage device 50 may be able to take advantage of speculative reading. Defragmentation can be a background activity to reduce performance impact.


Of note, defragmentation is not required for non-shingled files or pinned files. Defragmentation is also not required for shingled files since the data storage device 50 will not run out of space in SM 38 for the translation table even without defragmentation.


Furthermore, in some embodiments, the control circuitry 32 may balance garbage collection and defragmentation in order to optimize the performance of the data storage device. For example, in the embodiments, since defragmentation may create garbage, the control circuitry 32 may be configured to proactively write valid data to these areas, especially if they are within proximity to a current defragmented file. Proximity may be based on a number of factors, such as distance, track proximity, seek time, etc.



FIG. 7 illustrates an exemplary defect mapping according to one embodiment. As shown, defects can be added to a defect mapping list without re-format. In particular, the defect can be added by adjusting the physical addresses in the entries for all files after the growth defect and then adding a check point. Such an adjustment can be quickly performed since translation tables of the embodiments are relatively small and direct-mapped.
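One possible reading of this adjustment, sketched with illustrative names (this is our interpretation of the passage, not the patent's code): when a growth defect consumes a physical slot, every live file at or past that slot shifts past it, which for a small direct-mapped table is a single pass over the entries rather than a re-format.

```python
# Sketch: map out a growth defect by shifting the physical address of every
# file located at or after the defective physical slot, so the defective
# slot is skipped. All names are illustrative.
def map_out_defect(table, defective_physical_slot):
    """table: logical slot -> physical slot (or None if unassigned)."""
    for logical, phys in table.items():
        if phys is not None and phys >= defective_physical_slot:
            table[logical] = phys + 1     # skip over the defective slot
```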



FIG. 8 illustrates an exemplary file migration according to one embodiment. As shown, files may be identical for all media types. This allows for easy file migration between disk surfaces 4 and SM 38. The same translation table can thus handle shingled and non-shingled files on all media types.


As shown, host data may be written to the flash memory in SM 38, which is used as a write cache. The SM 38 will allow random and sequential writes to occur at flash write rates. The writes to flash can then be migrated to the “file” on the disk-based storage of data storage device 50.



FIG. 9 illustrates an exemplary recovery from an event according to one embodiment. In one embodiment, changes to the translation table may be check pointed to media periodically to maintain the current status of the files on the media(s). The check point may be stored on non-volatile memory so that the translation table can be restored, e.g., at power on.


Thus, in a power loss situation, the translation table can be restored from non-volatile memory in SM 38. If check pointing of the translation table is done at the event of any changes to content of the translation table, prior to any writing to files, the current state of the translation table will be valid. In one embodiment, translation table updates may be a single sector write since the entries are less than 1 sector.


In the embodiments, check pointing at the event of any change to the translation table can be used for multiple zones. The media may be split into multiple zones for multiple active streams. The state of the streams can also be stored in the translation table entries and check pointed in the same way. In this embodiment, power on recovery would not be needed to restore the translation table, as the current state of the translation table is always preserved before any writing to files occurs. In addition, since the current state of the translation table is preserved, meta data is not needed to rebuild the translation table.


If check pointing cannot be done on a translation table change, then meta data may be required and retrieved to re-build the translation table, for example, in the event of a power loss. The meta data may be any data that contains the logical to physical slot relationship and local and global sequence numbers for validation. For example, such meta data may be stored at the end of each file and systematically generated as they are written.


In one embodiment, in the event of a power loss, power on recovery may start at the end of the shingle at the last known check point location and scan for meta data at the end of each file. The meta data will be compared against the logical and physical slots as well as the sequence numbers. If the meta data does not match up or does not exist, the scan stops and the translation table reconstruction is complete.
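A toy reconstruction of this scan is sketched below; all structures and field names are illustrative (the patent describes both local and global sequence numbers, while this sketch checks a single sequence number for brevity).

```python
# Power-on recovery sketch: starting at the last check point, read the
# metadata trailer of each file past the end of the shingle. Each trailer
# carrying the next expected sequence number replays one un-checkpointed
# table update; a missing or mismatched trailer ends the reconstruction.
def recover(table, trailers, start_phys, last_seq):
    """trailers: physical slot -> (logical_slot, sequence_number) or absent."""
    phys, seq = start_phys, last_seq
    while True:
        meta = trailers.get(phys)
        if meta is None or meta[1] != seq + 1:
            return phys                   # reconstruction complete
        logical, seq = meta
        table[logical] = phys             # replay the un-checkpointed write
        phys += 1
```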


For example, as shown in FIG. 9, check pointed writes up to physical slot 9 are illustrated. The remaining three writes, however, are not check pointed. The last sequence numbers to be check pointed are thus 120/46. By scanning the files past the end of the shingle and reading the local sequence numbers, three more files in the shingle have been written. The global sequence number also indicates that there were no other zones updated prior. The slot relationship in the translation table is then updated and check pointed to complete power on recovery.


The features and attributes of the specific embodiments disclosed above may be combined in different ways to form additional embodiments, all of which fall within the scope of the present disclosure. Although the present disclosure provides certain embodiments and applications, other embodiments that are apparent to those of ordinary skill in the art, including embodiments that do not provide all of the features and advantages set forth herein, are also within the scope of this disclosure. Accordingly, the scope of the present disclosure is intended to be defined only by reference to the appended claims.

Claims
  • 1. A method of writing host data to rotating media of a data storage device, wherein the rotating media includes a shingled area and a non-shingled area, said method comprising: receiving at least one logical block address for data to be written on the rotating media of the data storage device;determining a pre-designated range corresponding to the at least one logical block address;determining a logical slot for the pre-designated range corresponding to the at least one logical block address, wherein the logical slot corresponds to the shingled area or the non-shingled area of the rotating media based on a characteristic of the data; andwriting the data in a physical slot allocated on the rotating media for the logical slot.
  • 2. The method of claim 1, wherein the pre-designated range corresponding to the at least one logical block address comprises a contiguous, sequential range of logical block addresses.
  • 3. The method of claim 1, wherein the logical slot comprises a file size of one megabyte or less.
  • 4. The method of claim 1, wherein the logical slot comprises a file size of two megabyte or less.
  • 5. The method of claim 1, wherein the logical slot comprises a file size of four megabyte or less.
  • 6. The method of claim 1, wherein writing the data comprises determining a zone allocated on the rotating media that has been configured to store files for a threshold size.
  • 7. The method of claim 1, wherein the characteristic of the data includes at least one of a frequency of use of the data, whether the data can be written quickly, and whether the data pertains to a host operating system.
  • 8. The method of claim 1, further comprising migrating less frequently used data from the non-shingled area to the shingled area of the rotating media.
  • 9. A method of reading data from rotating media of a data storage device, wherein the rotating media includes a shingled area and a non-shingled area, said method comprising: receiving a read command from a host comprising at least one logical block address;determining a pre-designated range corresponding to the at least one logical block address specified in the read command;determining a logical slot for the pre-designated range, wherein the logical slot corresponds to the shingled area or the non-shingled area of the rotating media based on a characteristic of the data;translating the logical slot to a physical slot on the rotating media of the data storage device; andreading the requested data from the physical slot.
  • 10. The method of claim 9, wherein the pre-designated range comprises a contiguous sequential range of logical block addresses.
  • 11. The method of claim 10, further comprising speculatively reading data corresponding to other logical block addresses in the pre-designated range for the logical slot.
  • 12. The method of claim 9, wherein translating the logical slot to the physical slot comprises determining an offset value from a starting logical block address of the pre-designated range.
  • 13. The method of claim 9, wherein the logical slot comprises a file size of one megabyte or less.
  • 14. The method of claim 9, wherein the logical slot comprises a file size of two megabyte or less.
  • 15. The method of claim 9, wherein the logical slot comprises a file size of four megabyte or less.
  • 16. The method of claim 9, wherein the characteristic of the data includes at least one of a frequency of use of the data, whether the data can be written quickly, and whether the data pertains to an operating system of the host.
  • 17. The method of claim 9, further comprising migrating less frequently used data from the non-shingled area to the shingled area of the rotating media.
  • 18. A data storage device comprising: rotating media including a shingled area and a non-shingled area for storing data;an interface configured for communications with a host; anda controller configured to receive at least one logical block address from the host via the interface, wherein pre-designated ranges of logical block addresses are assigned to data allocated to respective logical slots corresponding to the shingled area or the non-shingled area based on a characteristic of the data, andwherein the logical slots are mapped to respective physical slots on the rotating media.
  • 19. The data storage device of claim 18, wherein the rotating media comprises a rotatable disk.
  • 20. The data storage device of claim 18, wherein the rotating media is partitioned into a plurality of zones, and wherein different zones of the plurality of zones are configured based on a predetermined physical slot size.
  • 21. The data storage device of claim 18, wherein the controller is further configured to translate logical slots to physical slots based on a direct mapped translation table specifying an offset from a starting logical block address for each logical slot.
  • 22. The data storage device of claim 18, wherein the characteristic of the data includes at least one of a frequency of use of the data, whether the data can be written quickly, and whether the data pertains to an operating system of the host.
  • 23. The data storage device of claim 18, wherein the controller is further configured to migrate less frequently used data from the non-shingled area to the shingled area of the rotating media.
20090055620 Feldman et al. Feb 2009 A1
20090063548 Rusher et al. Mar 2009 A1
20090070529 Mee et al. Mar 2009 A1
20090119353 Oh et al. May 2009 A1
20090144493 Stoyanov Jun 2009 A1
20090150599 Bennett Jun 2009 A1
20090154254 Wong et al. Jun 2009 A1
20090164535 Gandhi et al. Jun 2009 A1
20090164696 Allen et al. Jun 2009 A1
20090193184 Yu et al. Jul 2009 A1
20090198952 Khmelnitsky et al. Aug 2009 A1
20090204750 Estakhri et al. Aug 2009 A1
20090222643 Chu Sep 2009 A1
20090235042 Petrocelli Sep 2009 A1
20090240873 Yu et al. Sep 2009 A1
20090271581 Hinrichs, Jr. Oct 2009 A1
20100011275 Yang Jan 2010 A1
20100208385 Toukairin Aug 2010 A1
20100235678 Kompella et al. Sep 2010 A1
20100281202 Abali et al. Nov 2010 A1
20100318721 Avila et al. Dec 2010 A1
20110138145 Magruder et al. Jun 2011 A1
20110167049 Ron Jul 2011 A1
20110197035 Na et al. Aug 2011 A1
Foreign Referenced Citations (1)
Number Date Country
2009102425 Aug 2009 WO
Non-Patent Literature Citations (8)
Entry
Rosenblum, Mendel and Ousterhout, John K. (Feb. 1992), “The Design and Implementation of a Log-Structured File System.” ACM Transactions on Computer Systems, vol. 10, Issue 1, pp. 26-52.
Rosenblum, “The Design and Implementation of a Log-structured File System”, EECS Department, University of California, Berkeley, Technical Report No. UCB/CSD-92-696, Jun. 1992.
Garth Gibson and Milo Polte, “Directions for Shingled-Write and Two-Dimensional Magnetic Recording System Architectures: Synergies with Solid-State Disks”, Parallel Data Lab, Carnegie Mellon Univ., Pittsburgh, PA, Tech. Rep. CMU-PDL-09-014 (2009).
Amer, Ahmed et al. (May 2010) “Design Issues for a Shingled Write Disk System” 26th IEEE Symposium on Massive Storage Systems and Technologies: Research Track.
Denis Howe, “Circular Buffer Definition”, 2010, The Free On-Line Dictionary of Computing, pp. 1-3, http://dictionary.reference.com/browse/circular+buffer.
The PC Guide, “Logical Block Addressing (LBA)”, Sep. 2, 2000, pp. 1-2, http://web.archive.org/web/20000902032612/http://www.pcguide.com/ref/hdd/bios/modesLBA-c.html.
Robert Hinrichs, “Circular Buffer”, Sep. 11, 2002, pp. 1-22, http://www.codeproject.com/Articles/2880/Circular-Buffer.
Margaret Rouse, “Logical Block Addressing (LBA)”, Apr. 2005, pp. 1-16, http://searchcio-midmarket.techtarget.com/definition/logical-block-addressing.