Disk drive with reduced-size translation table

Information

  • Patent Grant
  • Patent Number
    8,856,438
  • Date Filed
    Friday, December 9, 2011
  • Date Issued
    Tuesday, October 7, 2014
Abstract
A disk drive is disclosed that utilizes an additional address mapping layer between logical addresses used by a host system and physical locations in the disk drive. Physical locations configured to store metadata information can be excluded from the additional address mapping layer. As a result, a reduced size translation table can be maintained by the disk drive. Improved performance, reduced costs, and improved security can thereby be attained.
Description
BACKGROUND

1. Technical Field


This disclosure relates to disk drives and data storage systems for computer systems. More particularly, the disclosure relates to systems and methods for reducing the size of translation tables used by a disk drive or other data storage system.


2. Description of the Related Art


Disk drives typically comprise a disk and a head connected to a distal end of an actuator arm which is rotated about a pivot by a voice coil motor (VCM) to position the head radially over the disk. The disk comprises a plurality of radially spaced, concentric tracks for recording user data sectors and embedded servo sectors. The embedded servo sectors comprise head positioning information (e.g., a track address) which is read by the head and processed by a servo controller to control the velocity of the actuator arm as it seeks from track to track.


Disk drive capacity has grown by nearly six orders of magnitude since the introduction of magnetic disk storage in 1956. Current magnetic disk drives that use perpendicular recording are capable of storing as much as 400 Gbit/in². However, this is rapidly approaching the density limit of about 1 Tbit/in² (1000 Gbit/in²) due to the superparamagnetic effect for perpendicular magnetic recording. Accordingly, new approaches for improving storage density are needed.





BRIEF DESCRIPTION OF THE DRAWINGS

Systems and methods that embody the various features of the invention will now be described with reference to the following drawings, in which:



FIG. 1A illustrates a disk drive according to one embodiment of the invention.



FIG. 1B illustrates the format of a servo sector according to one embodiment of the invention.



FIG. 1C illustrates shingled tracks according to one embodiment of the invention.



FIG. 2 illustrates a combination of a storage system and a host system according to one embodiment of the invention.



FIG. 3 illustrates a memory map according to one embodiment of the invention.



FIG. 4A illustrates a memory map corresponding to host data according to one embodiment of the invention.



FIG. 4B illustrates a translation table corresponding to the memory map of FIG. 4A according to one embodiment of the invention.



FIG. 4C illustrates a reduced size translation table corresponding to the memory map of FIG. 4A according to one embodiment of the invention.



FIG. 5 illustrates a flow diagram for performing read operations according to one embodiment of the invention.





DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS

While certain embodiments are described, these embodiments are presented by way of example only, and are not intended to limit the scope of protection. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms. Furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the scope of protection.


Overview


Shingled writing can improve the density of magnetic disk storage. Because writing data to a magnetic medium generally requires a stronger magnetic field than reading data from the magnetic medium, adjacent data tracks can be partially overlapped or “shingled” during writing. This can be achieved by using a wider and stronger magnetic field when writing data as compared to the magnetic field used when reading data. As a result, shingled writing allows for narrower tracks, which can increase the data density by a factor of about 2.3 or higher. Disk drives that utilize shingled writing are referred to as “shingled disk drives” or “shingled hard drives” throughout this disclosure.


While shingled disk drives permit random access data reading, writing is performed sequentially because writing to a single track can affect (e.g., partially or fully overwrite) data in one or more adjacent tracks. For some shingled disks, the number of affected neighboring tracks can be between 4 and 8. Accordingly, shingled disk drives provide random read access and sequential write access.


To prevent inter-track interference, shingled disk drives can be configured to write data in tracks sequentially in one direction (e.g., from the outer diameter toward the inner diameter of a disk platter) so that a previously written track is overwritten only by the next adjacent track. A shingled disk drive can implement a log-structured file system (LFS), in which write operations progress in the same radial direction so that the data tracks can overlap. New data can be written to the head of a circular buffer (e.g., current data track). During garbage collection, valid data can be relocated from the tail of the buffer (e.g., preceding data tracks) to the head of the buffer in order to free space for new data to be written.
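The append-only write path and tail-to-head garbage collection described above can be sketched as follows. This is an illustrative model only (the function names and the list-of-records representation are assumptions, not part of any embodiment): the log holds (LBA, value) records from tail to head, new writes always append at the head, and garbage collection keeps only the newest copy of each LBA, so valid data effectively migrates toward the head.

```python
def write(log, lba, value):
    """Shingled writes are append-only: new data always goes to the
    head of the log; any older copy of the same LBA becomes stale."""
    log.append((lba, value))

def garbage_collect(log):
    """Drop superseded copies, keeping each LBA's newest record.
    `log` is a list of (lba, value) records from tail (oldest) to
    head (newest)."""
    newest = {lba: i for i, (lba, _) in enumerate(log)}
    return [rec for i, rec in enumerate(log) if newest[rec[0]] == i]
```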


Shingled disk drives can further utilize one or more translation tables that map logical addresses used by a host system to access a disk drive to physical locations or addresses in the disk drive. For example, the host can access the disk drive (e.g., to store and/or retrieve data) as a linear array of logical block addresses (LBAs), and the disk drive can utilize a translation table to map the logical block addresses to physical locations on a magnetic disk where host data is stored. Translation tables can be used by the disk drive to locate host data stored in the drive. In addition, the disk drive can be configured to store metadata information used by the disk drive to keep track of locations where host data is stored. For example, metadata for a particular track can include the mapping of logical addresses for which host data is stored on the track to physical addresses where host data is stored. Typically, the host system does not use metadata information. Instead, metadata information is used by the disk drive to locate host data stored in the disk drive.


In some embodiments of the present invention, a translation table of reduced size is maintained by a disk drive system. The disk drive can be configured to utilize an additional address mapping layer between the logical addresses used by the host system and physical locations in the disk drive. This additional mapping layer can relate the logical addresses used by the host system to another set of logical addresses utilized by the disk drive, which in turn correspond to physical locations in the disk drive. Logical addresses utilized by the disk drive exclude or skip some or all physical locations that are configured to store metadata. Accordingly, entries in the translation table can comprise logical addresses used by the host system along with corresponding logical addresses utilized by the disk drive. This approach permits the translation table to be smaller and, therefore, results in more efficient storage and retrieval of data.


In some embodiments of the present invention, the disk drive can be configured to further reduce the size of the translation table by utilizing one or more suitable data encoding or compression schemes, such as run length compression or encoding, Huffman coding, etc. For example, the host system can store consecutive or sequential data values starting at a given logical address in the disk drive. The disk drive can be configured to store the sequential data values along with metadata, for example, in the magnetic disk storage beginning with a given physical location. According to some embodiments, the translation table can comprise a single entry that includes the given logical address used by the host system as the starting address, a logical address utilized by the disk drive corresponding to the given physical location where a first data value of the sequence is stored, and a number of data values in the sequence. Using this additional mapping layer that excludes some or all physical locations configured to store metadata allows the disk drive to reduce the size of the translation table by taking advantage of run length encoding when a consecutive run of host data is stored in the disk drive.
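Such a run-length-encoded entry can be illustrated as follows. The tuple layout and values below are illustrative assumptions, not taken from the claims: a single entry records the host's starting logical address, the drive's starting logical address, and the run length, and a lookup simply adds the offset within the run.

```python
# One run-length entry can describe an entire sequential write:
# (host starting LBA, drive starting logical address, run length).
# The values are illustrative: 25 sequential values from LBA 100.
table = [(100, 0, 25)]

def to_drive_address(table, lba):
    """Return the drive logical address that holds host LBA `lba`,
    or None when the LBA is not stored."""
    for start_lba, start_addr, count in table:
        if start_lba <= lba < start_lba + count:
            return start_addr + (lba - start_lba)
    return None
```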


In some embodiments of the present invention, the disk drive can be configured to retrieve data requested by the host by using the translation table. The disk drive can determine a logical address utilized by the disk drive that corresponds to the logical address specified by the host in connection with a read data operation. The disk drive can convert the logical address utilized by the disk drive to a corresponding physical location in accordance with a predetermined conversion process. For example, when metadata information is stored in predetermined locations in the disk drive (e.g., in starting and middle locations of each track), the conversion process can be a mapping, such as a simple transformation, of logical addresses utilized by the disk drive to physical addresses.


System Overview



FIG. 1A illustrates a disk drive 100 according to one embodiment of the invention. Disk drive 100 comprises a disk 102, an actuator arm 4, and a head 104 actuated radially over the disk 102 and connected to a distal end of the actuator arm 4. The disk 102 comprises a plurality of data tracks 10, and the disk drive 100 further comprises a controller 14 configured to control the actuator arm 4 to position the head 104 over a target track.


In one embodiment, the disk 102 comprises a plurality of servo sectors 24_0-24_N that define the plurality of data tracks 10. The controller 14 processes the read signal to demodulate the servo sectors 24_0-24_N into a position error signal (PES). The PES is filtered with a suitable compensation filter to generate a control signal 26 applied to a voice coil motor (VCM) 110 which rotates the actuator arm 4 about a pivot in order to position the head 104 radially over the disk 102 in a direction that reduces the PES. The servo sectors 24_0-24_N may comprise any suitable position information, and in one embodiment, as is shown in FIG. 1B, each servo sector comprises a preamble 30 for synchronizing gain control and timing recovery, a sync mark 32 for synchronizing to a data field 34 comprising coarse head positioning information such as a track number, and servo bursts 36 which provide fine head positioning information. The coarse head positioning information is processed to position the head over a target track during a seek operation, and the servo bursts 36 are processed to maintain the head over a centerline of the target track while writing or reading data during a tracking operation.


In one embodiment, as is illustrated in FIG. 1C, the disk drive 100 is a shingled disk drive. A set 20 of data tracks 10 is depicted as being partially overlapped. For shingled disk drives, the head 104 can be configured to use a wider and stronger magnetic field when writing data as compared to the magnetic field used when reading data. Because a wider and stronger magnetic field is used when writing data, one or more adjacent tracks can be affected (e.g., partially or fully overwritten) when data is written to a particular track. Accordingly, while the data tracks 10 are not physically overlapping, they are represented as such in set 20 of FIG. 1C in order to illustrate that these tracks can be subjected to inter-track interference during writing.



FIG. 2 illustrates a combination 200 of a storage system and a host system according to one embodiment of the invention. As is shown, a disk drive 220 (e.g., a shingled disk drive) includes a controller 230 and a magnetic storage module 260. The disk drive 220 optionally includes a non-volatile storage memory module 250. The magnetic storage module 260 comprises magnetic media 264 (e.g., one or more disks 102) and a cache buffer 262. The non-volatile memory module 250 can comprise one or more non-volatile solid-state memory arrays, such as flash integrated circuits, Chalcogenide RAM (C-RAM), Phase Change Memory (PC-RAM or PRAM), Programmable Metallization Cell RAM (PMC-RAM or PMCm), Ovonic Unified Memory (OUM), Resistance RAM (RRAM), NAND memory, NOR memory, EEPROM, Ferroelectric Memory (FeRAM), or other discrete NVM (non-volatile memory) chips. The cache buffer 262 can comprise volatile storage (e.g., DRAM, SRAM, etc.), non-volatile storage (e.g., non-volatile solid-state memory, etc.), or any combination thereof. In one embodiment, the cache buffer 262 can store system data, such as translation tables.


The controller 230 can be configured to receive data and/or storage access commands from a storage interface module 212 (e.g., a device driver) of a host system 210. Storage access commands communicated by the storage interface 212 can include write and read commands issued by the host system 210. Read and write commands can specify a logical address (e.g., LBA) in the disk drive 220. The controller 230 can execute the received commands in the non-volatile memory module 250 and/or in the magnetic storage module 260.


Disk drive 220 can store data communicated by the host system 210. In other words, the disk drive 220 can act as memory storage for the host system 210. To facilitate this function, the controller 230 can implement a logical interface. The logical interface can present the disk drive's memory to the host system 210 as a set of logical addresses (e.g., contiguous addresses) where host data can be stored. Internally, the controller 230 can map logical addresses to various physical locations or addresses in the magnetic media 264 and/or the non-volatile memory module 250. In some embodiments, the controller 230 can map logical addresses used by the host system to logical addresses utilized by the disk drive (e.g., logical addresses that exclude or skip physical locations configured to store metadata information), which in turn correspond to physical locations in the disk drive.


Examples of Memory Mapping


FIG. 3 illustrates a memory map 300 according to one embodiment of the invention. Although FIG. 3 illustrates a mapping for a shingled disk drive comprising magnetic storage, the mapping can be applied to other types of non-volatile storage, such as magnetic storage that does not utilize shingled techniques, solid-state storage, etc. The map 300 depicts tracks 0 through 2 divided into half track parts, as illustrated in rows B through G. The map 300 further depicts in columns L through R physical addresses (or absolute block addresses (ABA)) in the magnetic media 264 along with logical addresses utilized by the disk drive 220. Although these logical addresses are referred to as “ShABA” (or shingled absolute block addresses), those of ordinary skill in the art will recognize that other reference names and/or abbreviations may be used.


In the embodiment illustrated in FIG. 3, metadata or write log information is stored or configured to be stored in physical locations corresponding to the beginning and middle of a track in the magnetic media 264. Those of ordinary skill in the art will appreciate that metadata can be stored in any predetermined or random physical locations in the disk drive. Location 302 depicts the first physical location in track 0, which is configured to store metadata. This location corresponds to ABA 0. Because logical addresses utilized by the disk drive are configured to exclude or skip physical locations configured to store metadata, there is no ShABA address corresponding to location 302. The next location in the track, depicted by the intersection of row B and column M, corresponds to ABA 1 and ShABA 0. The location depicted by the intersection of row B and column N corresponds to ABA 2 and ShABA 1. Likewise, the location depicted by the intersection of row B and column O corresponds to ABA 3 and ShABA 2. The next location, depicted as location 306 at the intersection of row B and column P, corresponds to ABA 4 and ShABA 3. In one embodiment, this location is defective. In other words, the disk drive does not store data in location 306. Those of ordinary skill in the art will appreciate that the disk drive may have no defective locations or may have more than one defective location.


Row C of FIG. 3 depicts addresses corresponding to locations in the next half track (i.e., the second half of track 0). The first location of the half track, location 304 (i.e., the location depicted by the intersection of row C and column L), corresponds to ABA 7. Because this location is configured to store metadata, it has no corresponding ShABA address. Subsequent locations in the half track have corresponding ABA and ShABA addresses as illustrated in FIG. 3. Rows D and E depict addresses corresponding to locations in the next track, track 1. Rows F and G depict addresses corresponding to locations in track 2. Memory map 300 illustrates physical addresses (e.g., ABA) 0 through 41 and corresponding logical addresses (e.g., ShABA) 0 through 35.
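The ShABA assignment of memory map 300 can be reproduced programmatically. The sketch below assumes the geometry described above for FIG. 3 (an assumption inferred from the text, not a claimed implementation): seven sectors per half track (columns L through R), with the first sector of each half track reserved for metadata and therefore skipped when numbering ShABA addresses.

```python
HALF_TRACK = 7   # sectors per half track (columns L through R)

def build_shaba_map(num_abas):
    """Assign drive logical addresses (ShABA) to every physical
    address (ABA) except those holding metadata, which here occupy
    the first sector of each half track."""
    aba_to_shaba = {}
    next_shaba = 0
    for aba in range(num_abas):
        if aba % HALF_TRACK == 0:
            continue                 # metadata location: no ShABA
        aba_to_shaba[aba] = next_shaba
        next_shaba += 1
    return aba_to_shaba

m = build_shaba_map(42)   # ABAs 0 through 41, as in memory map 300
```

Applying this to the 42 locations of memory map 300 yields ShABA addresses 0 through 35, matching the figure.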



FIG. 4A illustrates a memory map 400A corresponding to host data according to one embodiment of the invention. In particular, the memory map 400A corresponds to a memory layout for twenty-five sequential host data values beginning with LBA 100 and ending with LBA 124. This data can be stored in the disk drive in response to a write command received from the host. As is illustrated, these sequential host data values are stored in the disk drive 220 starting at a free physical location 402, which corresponds to ABA 1 and ShABA 0. In particular, physical location 402 stores host data associated with LBA 100. The next two physical locations (i.e., the locations depicted by the intersections of row B and columns N and O respectively) store host data associated with LBA 101 and 102 respectively. Physical location 406 depicted by the intersection of row B and column P corresponds to a defective location, and, in one embodiment, no host data is stored there. Subsequent locations in the half track store host data associated with LBA 103 and 104, as is illustrated in FIG. 4A.


In the embodiment illustrated in FIG. 4A, metadata information is stored in physical locations corresponding to the beginning and middle of a track. Those of ordinary skill in the art will appreciate that metadata can be stored in any predetermined or random physical locations. Physical location 404, which corresponds to the start of the second half track and ABA 7, is configured to store metadata. Physical location 404 does not have a corresponding ShABA address. This location is also not configured to store host data.


In one embodiment, as is illustrated by the table stored in location 404, metadata information includes a mapping between logical addresses for which host data is stored on the half track and physical locations in the half track. Metadata information can be generated and stored in the disk drive (e.g., in the magnetic media 264) by the controller 230 when a write command received from the host is processed (e.g., host data is written to the disk drive). As is illustrated, metadata information includes a mapping for the previous half track. In other words, the mapping stored in location 404 depicts the data layout of the first half of track 0. In particular, the entry in row BB indicates that a sequence of three host data values beginning with LBA 100 (i.e., values corresponding to LBA 100-102) is stored in physical locations starting at ABA 1. Similarly, the entry in row CC indicates that a sequence of two host data values beginning with the next LBA in the sequence (LBA 103) is stored in physical locations starting at ABA 5. As is explained above, no host data is stored in the defective location 406 corresponding to ABA 4. Those of ordinary skill in the art will appreciate that metadata information can include a mapping for the current half track, current track, plurality of current or previous half tracks, tracks, etc. In addition, those of ordinary skill in the art will appreciate that other suitable mapping techniques or combinations of such techniques can be used to generate metadata or write log information.


Row C of FIG. 4A depicts the layout for the remainder of the track (i.e., the second half of track 0). Rows D and E depict the layout for the next track, track 1. Row F depicts the layout for the first half of track 2. Location 408, which corresponds to ABA 30 and ShABA 25, in the first half of track 2 stores the data value associated with LBA 124. Subsequent physical locations in track 2 are depicted as being empty (i.e., these locations do not store valid host data).


Reduced Size Translation Table



FIG. 4B illustrates a translation table 400B corresponding to the memory map of FIG. 4A according to one embodiment of the invention. Translation table 400B can be used by the disk drive 220 (e.g., by the controller 230) to locate host data stored in the disk drive. The controller 230 can be configured to generate and/or update the translation table 400B when processing write operation(s) received from the host system. In one embodiment, the controller 230 generates the translation table 400B when user data is initially stored in the disk drive. When subsequent write operations are received from the host system, the controller can update the translation table 400B. In particular, when host data specified by a write command is being stored in the disk drive, the controller 230 can generate and store metadata information, which can be interspersed with host data. The controller 230 can generate and/or update the translation table 400B in conjunction with storing host data in the disk drive. The controller 230 can generate and/or update the translation table 400B before storing host data, while storing host data, or after storing host data.


With reference to FIG. 4B, as is explained above, twenty-five sequential host data values beginning with LBA 100 and ending with LBA 124 are stored in the disk drive starting at a free physical location 402, which corresponds to ABA 1 and ShABA 0. Table 400B reflects the mapping between logical addresses used by the host system and physical locations where data corresponding to the logical addresses is stored. In addition, table 400B utilizes run length encoding for consecutive or sequential blocks of host data. Although a single translation table is illustrated, those of ordinary skill in the art will recognize that other translation tables can be utilized by the disk drive. In addition, although the translation table is depicted as a table, those of ordinary skill in the art will recognize that any suitable data structure can be used, such as a one dimensional array, multi-dimensional array, linked list, tree, graph, and the like.


With reference to FIG. 4A, three consecutive values corresponding to LBA 100-102 are stored in locations having ABA addresses 1-3, which is reflected by the entry in row B of FIG. 4B. No host data is stored in the defective location corresponding to ABA 4. The remainder of the half track comprises two consecutive data values corresponding to LBA 103 and 104 stored in locations having ABA addresses 5 and 6, which is reflected by the entry in row C of FIG. 4B. The next half track comprises six consecutive data values corresponding to LBA 105-110 stored in locations having ABA addresses 8-13, which is reflected by the entry in row D of FIG. 4B. Similarly, the next half track comprises six consecutive data values corresponding to LBA 111-116 stored in locations having ABA addresses 15-20, which is reflected by the entry in row E of FIG. 4B. Likewise, next half track comprises six consecutive data values corresponding to LBA 117-122 stored in locations having ABA addresses 22-27, which is reflected by the entry in row F of FIG. 4B. The last two data values corresponding to LBA 123 and 124 are stored in locations with ABA addresses 29 and 30, which is reflected by the entry in row G of FIG. 4B.
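The six entries of table 400B can be derived by walking the layout of FIG. 4A. The sketch below assumes the same geometry used throughout this discussion (seven-sector half tracks with metadata in the first sector, and ABA 4 defective) and is illustrative only; a new run-length entry begins wherever the usable physical addresses stop being consecutive.

```python
HALF_TRACK = 7
DEFECTIVE = {4}            # ABA 4 is the defective location in FIG. 4A

def usable_abas(start, count):
    """Yield `count` ABAs available for host data, skipping metadata
    sectors (first of each half track) and defective locations."""
    aba, yielded = start, 0
    while yielded < count:
        if aba % HALF_TRACK != 0 and aba not in DEFECTIVE:
            yield aba
            yielded += 1
        aba += 1

def build_table(start_lba, start_aba, count):
    """Build run-length entries (lba, aba, n); a run breaks wherever
    the physical addresses stop being consecutive."""
    entries = []
    for lba, aba in zip(range(start_lba, start_lba + count),
                        usable_abas(start_aba, count)):
        if entries and aba == entries[-1][1] + entries[-1][2]:
            entries[-1][2] += 1    # extend the current run
        else:
            entries.append([lba, aba, 1])
    return [tuple(e) for e in entries]
```

Running `build_table(100, 1, 25)` reproduces the six rows of FIG. 4B.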


As is illustrated, physical locations configured to store metadata fragment or break up entries in the translation table 400B. In other words, because metadata is interspersed with host data stored on the disk drive, sequential runs of host data are broken up by physical locations that store metadata. This fragmentation is reflected in table 400B. In addition, entries in table 400B are broken up by the defective location 406 corresponding to ABA 4.



FIG. 4C illustrates a reduced size translation table 400C corresponding to the memory map of FIG. 4A according to one embodiment of the invention. As is illustrated, table 400C utilizes logical addresses used by the disk drive 220 (e.g., ShABA) instead of physical addresses (e.g., ABA). Because logical addresses used by the disk drive are configured to exclude or skip locations configured to store metadata, the size of table 400C is smaller than the size of table 400B. Using logical addressing (e.g., ShABA addressing) allows the disk drive to rejoin or defragment consecutive runs of host data values that are fragmented by metadata intermingled with host data on the disk drive (e.g., in the magnetic media 264). In other words, because logical addressing used by the disk drive does not account for physical locations configured to store metadata, stored metadata does not break up consecutive runs of host data. In one embodiment, logical addressing can exclude or skip metadata that is stored in physical locations according to a known or predetermined pattern (e.g., at the beginning and middle of each track). In another embodiment, logical addressing can in addition exclude or skip metadata that is stored in quasi-random or random physical locations.


As is illustrated in FIG. 4C, translation table 400C comprises two entries. The entry depicted in row B corresponds to a run of consecutive host data values associated with LBA 100-102 (3 consecutive data values), which are stored in physical locations having ShABA addresses 0-2. No host data is stored in the defective location corresponding to ShABA 3. The remaining host data values associated with LBA 103-124 (22 consecutive data values) are stored in physical locations having ShABA addresses 4-25, as is depicted in row C and in FIG. 4A. Accordingly, the translation table 400B, which comprises six entries, can be reduced to a two-entry translation table 400C, which captures the same information as that captured by the table 400B.
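The collapse from table 400B to table 400C can be illustrated by translating each ABA-based entry into ShABA terms and merging runs that are adjacent in both LBA and ShABA space. The entries and the `aba_to_shaba` conversion below assume FIG. 4A's seven-sector half tracks with metadata in the first sector of each; both are illustrative sketches, not the claimed implementation.

```python
def aba_to_shaba(aba):
    """Convert a physical address to a drive logical address; ShABA
    skips the metadata sector at the start of each 7-sector half
    track (the inverse of equation (1) in this layout)."""
    return aba - (aba // 7 + 1)

def reduce_table(table_b):
    """Rewrite ABA-based entries in ShABA terms and merge runs that
    metadata locations had fragmented."""
    merged = []
    for lba, aba, n in table_b:
        shaba = aba_to_shaba(aba)
        if merged and lba == merged[-1][0] + merged[-1][2] \
                  and shaba == merged[-1][1] + merged[-1][2]:
            merged[-1][2] += n     # runs rejoin in ShABA space
        else:
            merged.append([lba, shaba, n])
    return [tuple(e) for e in merged]

# The six entries of FIG. 4B: (LBA, ABA, run length).
table_400b = [(100, 1, 3), (103, 5, 2), (105, 8, 6),
              (111, 15, 6), (117, 22, 6), (123, 29, 2)]
```

Note that the defective location (ShABA 3) still separates the first run from the second, which is why two entries remain rather than one.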


In one embodiment, the translation table (e.g., table 400B, 400C, etc.) can be stored or cached in volatile memory, such as cache buffer 262 (which can be implemented in DRAM in one embodiment), which can provide faster access to the table as compared to the table being stored only in the magnetic media 264. In another embodiment, the translation table can be stored in non-volatile memory 250. The disk drive can be further configured to store a working copy of the translation table in the cache buffer 262 or the non-volatile memory 250, while storing a second (e.g., backup) copy of the translation table in the magnetic media 264. When multiple copies of the translation table are used, the disk drive can be configured to periodically save the working copy of the translation table. For example, the controller 230 can be configured to save the working copy in a reserved area of the magnetic media 264.


Example of Using a Reduced Size Translation Table


FIG. 5 illustrates a flow diagram 500 for performing read operations according to one embodiment. The process 500 can be executed by the controller 230. The process begins at 502 when the disk drive 220 receives a read command from the host system 210, the read data command being associated with a starting LBA address and a count of consecutive data values to be retrieved. In block 504 the process 500 sets a counter of retrieved data values to zero and transitions to block 506.


In block 506, the process 500 determines a logical address utilized by the disk drive (e.g., ShABA address) that corresponds to the starting LBA address associated with the read data command. In one embodiment, the process 500 can search the translation table 400C (e.g., perform a look-up operation) and determine the logical address corresponding to the starting LBA address. The process transitions to block 508 where it determines the number of consecutive host data values that follow the logical address corresponding to the starting LBA address (including the data value corresponding to the logical address).


In block 510, the process determines whether the count of consecutive data values to be retrieved is smaller than or equal to the number of consecutive host data values that follow the logical address corresponding to the starting LBA address. If the count of consecutive data values is determined to be smaller than or equal to the number of consecutive host data values, the process transitions to block 512. Otherwise, the process transitions to block 514.


In block 512, the process 500 retrieves data values consecutively stored beginning with the logical address corresponding to the starting LBA address. The number of consecutive data values retrieved corresponds to the count specified in block 502. In one embodiment, the process 500 can convert the logical address (e.g., ShABA address) corresponding to the starting LBA address to a physical address (e.g., ABA address) by using a predetermined conversion process. For example, as is explained above, logical addresses utilized by the disk drive can be mapped to corresponding physical addresses in the disk drive 220. Further, when metadata information is stored according to a known or predetermined pattern (e.g., in starting and middle locations of each track), the conversion process can be a simple transformation of logical addresses utilized by the disk drive to physical addresses. For the mapping illustrated in FIG. 4A, this transformation satisfies the following equation:

Physical address = Logical address utilized by disk drive + n  (1)


where n corresponds to a half track number (selected from the set [1, N]) for the logical address utilized by the disk drive. Starting at the physical address corresponding to the logical address utilized by the disk drive, the process 500 retrieves a number of consecutively stored data values corresponding to the count specified in block 502. When the process completes the retrieval of host data, it transitions to 520 where data is communicated (e.g., returned) to the host system.
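Under the layout assumed for FIG. 4A (six data sectors per half track, with metadata in the seventh), equation (1) can be computed directly, with n derived from the logical address itself. This is an illustrative sketch of the transformation, not the claimed implementation:

```python
HALF_TRACK_DATA = 6   # data sectors per half track (assumed from FIG. 4A)

def to_physical(shaba):
    """Equation (1): physical address = drive logical address + n,
    where n is the 1-based number of the half track that contains
    the logical address."""
    n = shaba // HALF_TRACK_DATA + 1
    return shaba + n
```

For example, ShABA 25 (the address holding LBA 124 in FIG. 4A) lies in half track 5, so equation (1) gives physical address 25 + 5 = 30, matching ABA 30 in the figure.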


If the process 500 determines in block 510 that the count specified in block 502 is greater than the number of consecutive host data values that follow the logical address corresponding to the starting LBA address, the process transitions to block 514. In block 514, the process 500 converts the logical address corresponding to the starting LBA address to a physical address and retrieves host data stored in the disk drive beginning with that physical address. The process retrieves a number of consecutively stored data values corresponding to the number determined in block 508, and updates (e.g., increments) the counter of retrieved data values. The process transitions to block 516 where it determines whether the number of retrieved data values (as tracked by the counter) equals or exceeds the count specified in block 502. If the process determines that all data values requested by the host system have been retrieved, it transitions to block 520 where data is communicated (e.g., returned) to the host system.


If in block 516 the process 500 determines that it needs to retrieve more host data values stored in the disk drive, the process transitions to block 518. In block 518, the process determines the next logical address utilized by the disk drive. In one embodiment, the process 500 can search the translation table 400C and determine such next logical address. For example, the process can determine the next logical address as follows:


starting LBA address specified in block 502 + number of consecutively stored data values determined in block 508.


Alternatively, the process 500 can determine the next logical address by looking up the next entry in the table 400C. The process also determines the number of consecutive host data values that follow the next logical address (including the data value corresponding to the next logical address). The process transitions to block 514 where data is retrieved as explained above.
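The retrieval loop of blocks 508 through 518 can be sketched as follows. The table contents and the helper names are hypothetical (the entry layout loosely mirrors table 400C: each entry holds a starting host LBA, a drive logical address, and a count of consecutively stored host data units); this is an illustration of the loop structure, not the patented implementation.

```python
from bisect import bisect_right

# Hypothetical run-length translation table (cf. table 400C): each entry is
# (starting host LBA, drive logical address, count of consecutive data units).
TABLE = [
    (0,  100, 8),   # host LBAs 0-7   -> drive logical 100-107
    (8,  300, 4),   # host LBAs 8-11  -> drive logical 300-303
    (12, 50,  6),   # host LBAs 12-17 -> drive logical 50-55
]

def read(start_lba: int, count: int, media: dict) -> list:
    """Retrieve `count` host data units beginning at `start_lba`.

    Mirrors process 500: determine how many consecutive units follow the
    current logical address (block 508), retrieve up to that many (blocks
    512/514), then advance to the next table entry if more units are
    needed (blocks 516/518).
    """
    out = []
    lbas = [entry[0] for entry in TABLE]
    i = bisect_right(lbas, start_lba) - 1          # entry covering start_lba
    while len(out) < count:
        entry_lba, logical, run = TABLE[i]
        offset = start_lba + len(out) - entry_lba  # position within this run
        avail = run - offset                       # consecutive units remaining (block 508)
        take = min(avail, count - len(out))        # block 510: full request vs. partial run
        for j in range(take):
            out.append(media[logical + offset + j])
        i += 1                                     # next table entry (block 518)
    return out
```

A read that straddles two runs (e.g., four units starting at LBA 6) retrieves the last two units of the first entry and the first two units of the second, just as process 500 iterates through blocks 514-518.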


CONCLUSION

By using an additional address mapping layer between the logical addresses used by the host system and physical locations in the disk drive, a translation table of reduced size can be maintained by the disk drive. In addition, the use of data encoding or compression schemes, such as run length encoding, can further reduce the size of the translation table. Maintaining a reduced-size translation table can result in more efficient data storage and retrieval. In addition, when a working copy of the translation table is stored in volatile memory, such as DRAM, volatile memory of a smaller size can be used, which results in cost savings. Moreover, by excluding or skipping physical locations configured to store metadata information from the additional mapping layer, metadata can be better concealed from the host system. For example, the host system may need to communicate a special (e.g., non-standard) command to access physical locations configured to store metadata information. This can provide better security for data stored in the disk drive.
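The size reduction from run length encoding can be quantified with a toy example (the addresses and counts below are hypothetical): a single sequential host write that would otherwise occupy one translation entry per data unit collapses to one run-length entry.

```python
# Hypothetical sequential host write: 1,000 consecutive data units at LBA 5,000.
# Without run length encoding, the table needs one (LBA, value) entry per unit:
per_unit_entries = [(5000 + i, 120000 + i) for i in range(1000)]

# With run length encoding, the same write is captured by a single entry
# of the form (starting LBA, value, count):
rle_entry = (5000, 120000, 1000)

print(len(per_unit_entries), "entries reduced to 1")
```

The smaller the table, the smaller the DRAM working copy needs to be, which is the cost saving noted above.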


OTHER VARIATIONS

Those skilled in the art will appreciate that in some embodiments, other techniques to reduce the size of translation tables can be implemented. In addition, the actual steps taken in the disclosed processes, such as the process shown in FIG. 5, may differ from those shown in the figures. Depending on the embodiment, certain of the steps described above may be removed, and others may be added. Accordingly, the scope of the present disclosure is intended to be defined only by reference to the appended claims.


While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the protection. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms. Furthermore, various omissions, substitutions, and changes in the form of the methods and systems described herein may be made without departing from the spirit of the protection. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the protection. For example, the systems and methods disclosed herein can be applied to hybrid hard drives, solid state drives, and the like. In addition, other forms of storage (e.g., DRAM or SRAM, battery backed-up volatile DRAM or SRAM devices, EPROM, EEPROM memory, etc.) may additionally or alternatively be used. As another example, the various components illustrated in the figures may be implemented as software and/or firmware on a processor, ASIC/FPGA, or dedicated hardware. Also, the features and attributes of the specific embodiments disclosed above may be combined in different ways to form additional embodiments, all of which fall within the scope of the present disclosure. Although the present disclosure provides certain preferred embodiments and applications, other embodiments that are apparent to those of ordinary skill in the art, including embodiments which do not provide all of the features and advantages set forth herein, are also within the scope of this disclosure. Accordingly, the scope of the present disclosure is intended to be defined only by reference to the appended claims.

Claims
  • 1. A method for processing storage operations received from a host system, the method performed by a disk drive system comprising a plurality of physical memory locations configured to store at least one of data received from the host system and metadata, the method comprising: storing data received from the host system in a first set of physical memory locations; storing metadata in a second set of physical memory locations designated for storing metadata and not data received from the host system; maintaining a translation table configured to map a set of logical addresses to the first set of physical memory locations, the translation table having a reduced size due to excluding the second set of physical memory locations designated for storing metadata from the translation table; and in response to receiving from the host system a read operation associated with a logical address, using the reduced size translation table to retrieve data stored in a physical memory location in the disk drive corresponding to the logical address and communicating the retrieved data to the host system.
  • 2. The method of claim 1, wherein using the translation table to retrieve data further comprises: looking up in the translation table a value that corresponds to the logical address; and converting the value to the physical memory location in the disk drive corresponding to the logical address in accordance with a predetermined conversion process.
  • 3. The method of claim 1, further comprising reducing the size of the translation table by: in response to receiving from a host system a write operation associated with a starting logical address and a plurality of consecutive host data units, creating a single translation table entry comprising the starting logical address, a value corresponding to the starting logical address, and a count of the number of data units in the plurality of consecutive host data units, wherein the value corresponds to a starting physical memory location in the disk drive.
  • 4. The method of claim 3, wherein the plurality of consecutive host data units and metadata associated with the plurality of consecutive host data units are stored in a plurality of consecutive physical memory locations beginning at the starting physical location.
  • 5. The method of claim 1, wherein metadata is stored in predetermined physical memory locations in the disk drive.
  • 6. The method of claim 5, wherein the disk drive system further comprises magnetic storage and metadata is stored in predetermined physical memory locations in the magnetic storage.
  • 7. The method of claim 1, wherein the disk drive system comprises at least one track and metadata is stored in physical memory locations corresponding to a start and middle of the at least one track.
  • 8. A shingled disk drive configured to process at least read and write operations received from a host system, the shingled disk drive comprising: magnetic storage comprising a plurality of physical memory locations configured to store data, wherein a first set of physical memory locations is designated for storing data received from a host system and a second set of physical memory locations is designated for storing metadata and not data received from the host system; and a controller configured to: process a read operation associated with a starting logical address, the starting logical address corresponding to a physical memory location of the plurality of physical memory locations in the magnetic storage; determine the physical memory location corresponding to the starting logical address using a translation table; retrieve data stored in the physical memory location; and communicate the retrieved data to the host system, wherein the translation table comprises a plurality of entries mapping a set of logical addresses to the first set of physical memory locations and the translation table has a reduced size due to exclusion of the second set of physical memory locations designated for storing metadata from the translation table.
  • 9. The shingled disk drive of claim 8, wherein the controller is further configured to: look up in the translation table a value that corresponds to the starting logical address; and convert the value to the physical memory location corresponding to the starting logical address in accordance with a predetermined conversion process.
  • 10. The shingled disk drive of claim 8, wherein the controller is further configured to: process a write operation associated with a logical address and a plurality of host data units; determine a starting physical memory location for storing the plurality of consecutive host data units; and write the plurality of consecutive host data units along with metadata information in consecutive physical memory locations beginning at the starting physical memory location.
  • 11. The shingled disk drive of claim 10, wherein the starting physical memory location corresponds to a free physical memory location designated for storing data received from the host system.
  • 12. The shingled disk drive of claim 10, wherein the controller is further configured to reduce the size of the translation table by updating the translation table with a single entry comprising the logical address, a value corresponding to the starting physical memory location, and a count of the number of data units in the plurality of consecutive data units.
  • 13. The shingled disk drive of claim 8, wherein metadata is stored in predetermined physical memory locations in the magnetic storage.
  • 14. The shingled disk drive of claim 8, wherein the magnetic storage comprises at least one track and metadata is stored in physical memory locations corresponding to a start and middle of the at least one track.
  • 15. The shingled disk drive of claim 8, wherein: at least some memory locations in the first set of physical memory locations in the magnetic storage comprise defective memory locations where data is not stored; and entries in the translation table do not refer to any defective memory locations.
  • 16. The method of claim 1, wherein: in the reduced size translation table, first and second physical memory locations storing data received from the host system are represented as consecutive locations, the first and second physical memory locations being physically separated in a disk drive memory by a physical memory location designated for storing metadata.
  • 17. The method of claim 1, further comprising reducing the size of the translation table by applying run length encoding to create a single translation table entry corresponding to a plurality of consecutive host data units received from the host system for storage in the disk drive.
  • 18. The shingled disk drive of claim 8, wherein: in the reduced size translation table, first and second physical memory locations storing data received from the host system are represented as consecutive locations, the first and second physical memory locations being physically separated in the magnetic storage by a physical memory location designated for storing metadata.
  • 19. The shingled disk drive of claim 8, wherein the controller is further configured to reduce the size of the translation table by applying run length encoding to create a single translation table entry corresponding to a plurality of consecutive host data units received from the host system for storage in the magnetic storage.