Data storage devices are used to access digital data in a fast and efficient manner. At a host level, user data are often structured in terms of variable-length files, which can be constituted from one or more fixed-size logical blocks (addressed by logical block addresses, or LBAs).
To store or retrieve user data with an associated data storage device, host commands are generally issued to the device using a logical block convention. The device carries out an internal conversion of the LBAs to locate the associated physical blocks (e.g., sectors) of media on which the data are to be stored, or from which the data are to be retrieved.
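As a purely illustrative aside, a classic form of such a logical-to-physical conversion is a cylinder-head-sector (CHS) mapping; the sketch below uses hypothetical geometry parameters and is not the conversion performed by any particular device described herein.

```python
def lba_to_chs(lba: int, heads_per_cylinder: int, sectors_per_track: int):
    """Classic LBA-to-CHS conversion; geometry values are hypothetical."""
    cylinder = lba // (heads_per_cylinder * sectors_per_track)
    head = (lba // sectors_per_track) % heads_per_cylinder
    sector = (lba % sectors_per_track) + 1  # physical sector numbering is 1-based
    return cylinder, head, sector


# Example: locate LBA 5000 on an assumed geometry of 4 heads, 63 sectors/track.
print(lba_to_chs(5000, heads_per_cylinder=4, sectors_per_track=63))  # (19, 3, 24)
```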
When the data storage device is characterized as a disc drive, a controller may be used to execute a seek command to move a data transducer adjacent a rotating recording disc and carry out the data transfer operation with the associated physical sector(s). Other types of data storage devices (for example, solid state data storage devices that have no moving parts) generally carry out other types of access operations to transfer the associated data.
With continued demands for data storage devices with ever higher data storage and data transfer capabilities across a wide range of data types, there remains a continual need to improve the manner in which data associated with LBAs are transferred from a host to a data storage device, and the manner in which storage of the transferred data is managed within the data storage device. It is to these and other improvements that the present embodiments are generally directed.
In a particular embodiment, a method is disclosed that includes receiving, in a data storage device, at least one data packet that has a size that is different from an allocated storage capacity of at least one physical destination location on a data storage medium in the data storage device for the at least one data packet. The method also includes storing the at least one received data packet in a non-volatile cache memory prior to transferring the at least one received data packet to the at least one physical destination location.
In another particular embodiment, a method is disclosed that includes temporarily storing multiple received data packets in a first cache memory in a data storage device prior to storing the multiple data packets in respective physical destination locations in the data storage device. The method also includes transferring the multiple data packets and any existing data in the respective physical destination locations to a second cache memory.
In yet another particular embodiment, a device is disclosed that includes a first cache memory, a non-volatile memory of a different type than the first cache memory, and a controller. The controller is configured to temporarily store received data packets in the non-volatile memory. The controller is also configured to, when the received data packets reach a predetermined number of data packets, transfer the predetermined number of data packets to the first cache memory prior to storing the predetermined number of data packets in respective physical destination locations in the device.
In the following detailed description of the embodiments, reference is made to the accompanying drawings, which form a part hereof and in which are shown, by way of illustration, specific embodiments. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present disclosure.
The disclosure is related, in a particular example, to data storage management in systems with one or more storage entities. In one example, a storage entity can be a component that includes one or more memories accessible to a controller that is external to the storage entity. The systems and methods described herein are particularly useful for memory systems that employ data storage discs; however, they can be applied to any type of memory system, for example, to improve data storage management.
Referring to
The memory device 108 may include a controller 110, which may be coupled to the processor 102 via a connection through the system bus 103. In one embodiment, the memory device 108 comprises at least one storage entity 112. In a particular embodiment, storage entity 112 includes permanent storage locations 113 and cache memory locations 114. The cache memory locations 114 and the permanent storage locations 113 may be on a common storage medium or may be on separate storage media within storage entity 112.
During operation, the processor 102 may send commands and data to the memory device 108 to retrieve or store data. The controller 110 can receive the commands and data from the processor 102 and then manage the execution of the commands to store or retrieve data from storage entity 112.
In some embodiments, write commands received in memory device 108 from processor 102 or any other suitable sending interface include data addressed by logical block addresses (LBAs). Device 108 processes the received commands and ultimately stores the data accompanying the received commands in respective ones of the permanent storage locations 113, which are typically mapped to LBAs. In some such embodiments, individual ones of the LBAs may be substantially permanently associated with individual ones of the permanent storage locations 113. In other such embodiments, the LBAs are mutably associated with the permanent storage locations 113. For various reasons, some of which are described further below, performance of device 108 may be optimized by first temporarily storing the multiple received write commands in locations within cache memory 114, for example, and at a later time (for example, when device 108 is idle), transferring the data into permanent storage locations 113. In some embodiments, the transfer of the data associated with the multiple received write commands from the cache 114 is carried out in a particular manner. In one embodiment, a subset of the multiple write commands is selected for transfer to the permanent storage locations 113 based on proximity between LBAs of different ones of the multiple write commands. Further, the subset of the write commands may be executed in an order based on proximity between permanent storage locations 113 for individual ones of the subset of the write commands. In general, in such embodiments, the process of storing data included in received write commands is substantially optimized by grouping the received commands into different subsets based on proximity between LBAs of different ones of the multiple write commands, and then executing the different subsets of write commands based on proximity between permanent storage locations 113 on a data storage medium for individual ones of the respective subsets of the write commands.
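As an aid to understanding only, the following is a minimal sketch of such grouping and ordered execution; the function names, the proximity threshold, and the in-memory representation of cached write commands are illustrative assumptions rather than the actual implementation of device 108.

```python
from typing import Callable, List, Tuple

Write = Tuple[int, bytes]  # (LBA, data) pair standing in for a cached write command


def group_by_lba_proximity(writes: List[Write], max_gap: int = 8) -> List[List[Write]]:
    """Split cached writes into subsets whose LBAs lie within max_gap of a neighbor."""
    groups: List[List[Write]] = []
    current: List[Write] = []
    for lba, data in sorted(writes, key=lambda w: w[0]):
        if current and lba - current[-1][0] > max_gap:
            groups.append(current)
            current = []
        current.append((lba, data))
    if current:
        groups.append(current)
    return groups


def flush_groups(groups: List[List[Write]],
                 write_to_medium: Callable[[int, bytes], None]) -> None:
    """Execute each subset in ascending LBA order so that nearby destination
    locations are written together."""
    for group in groups:
        for lba, data in group:
            write_to_medium(lba, data)
```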
In accordance with some embodiments, data sent from a sending interface to a device such as 108 may be in the form of packets that are sized smaller than a data region of a permanent storage location 113. For instance, the packets may be 512 bytes in length, whereas the data region of each permanent storage location 113 may be 1024 bytes in length, 1536 bytes in length, 2048 bytes in length, etc. Packets received from a sending interface that are of a different size (smaller or larger) than an allocated storage capacity of data storage regions of permanent storage locations 113 are referred to herein as unaligned packets. In such embodiments, certain processing needs to be carried out to properly accommodate the unaligned packets in the permanent storage locations 113. The processing can include first reading whatever data is currently stored in the physical destination locations (specific ones of permanent storage locations 113) for the unaligned packets into cache, modifying the current data with the data in the unaligned packets, and then writing the modified data back to the respective specific ones of the permanent storage locations 113. This process is referred to herein as a "read-modify-write" process and is described in detail in connection with
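A minimal sketch of this read-modify-write handling for a single unaligned packet is shown below, assuming 512-byte packets and 2048-byte destination regions as in the example sizes above; the buffer layout and names are hypothetical.

```python
PACKET_SIZE = 512    # bytes per packet from the sending interface (example above)
SECTOR_SIZE = 2048   # assumed allocated capacity of one permanent storage location


def read_modify_write(medium: bytearray, sector_index: int,
                      packet_slot: int, packet: bytes) -> None:
    """Fold one unaligned packet into its destination sector.

    `medium` stands in for the storage medium; `packet_slot` (0-3) is the
    packet-sized region of the sector that the unaligned packet replaces.
    """
    assert len(packet) == PACKET_SIZE
    start = sector_index * SECTOR_SIZE
    # Read: copy the current contents of the destination sector into cache.
    buffer = bytearray(medium[start:start + SECTOR_SIZE])
    # Modify: overwrite only the region covered by the unaligned packet.
    buffer[packet_slot * PACKET_SIZE:(packet_slot + 1) * PACKET_SIZE] = packet
    # Write: store the whole modified sector back to its destination location.
    medium[start:start + SECTOR_SIZE] = buffer
```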
MEM 306 can include random access memory (RAM), read only memory (ROM), and other sources of resident memory for microprocessor 304. Disc drive 300 includes one or more data storage discs 312 that are described in detail further below in connection with
Data is transferred between host computer 302 and disc drive 300 by way of disc drive interface 310, which includes a buffer 318 to facilitate high speed data transfer between host computer 302 and disc drive 300. A substantial portion of a read-modify-write operation within drive 300 may be carried out within buffer 318, which is referred to hereinafter as a second cache memory. In one embodiment, second cache 318 is constructed from solid-state components. While the second cache memory is depicted in
Data to be written to disc drive 300 are passed from host computer 302 to cache 318 and then to a read/write channel 322, which encodes and serializes the data and provides the requisite write current signals to heads 316. To retrieve data that have been previously stored by disc drive 300, read signals are generated by the heads 316 and provided to read/write channel 322. Interface 310 performs read signal decoding, error detection, and error correction operations. Interface 310 then outputs the retrieved data to cache 318 for subsequent transfer to the host computer 302.
As the disc 312 rotates, data head 316 reads the servo information containing an address within the servo bursts 332 and sends the servo information back to servo control system 320. Servo control system 320 checks whether the address in the servo information read from burst sectors 332 corresponds to the desired head location. If the address does not correspond to the desired head location, servo control system 320 adjusts the position of head 316 to the correct track location.
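In pseudocode terms, and with hypothetical callables standing in for the servo control system's read and adjust operations, that check-and-correct loop might be sketched as follows.

```python
def settle_on_track(read_servo_address, adjust_position, target_address, max_tries=100):
    """Toy version of the servo check described above; the callables and the
    retry bound are illustrative assumptions, not the actual servo loop."""
    for _ in range(max_tries):
        if read_servo_address() == target_address:
            return True           # head 316 is over the desired track
        adjust_position(target_address)
    return False                  # could not settle within the retry bound
```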
As indicated above, each track 330 includes wedge sectors 334 containing stored user information. The number of wedge sectors 334 contained on a particular track 330 depends, in part, on the length (i.e., circumference) of the track 330. Besides containing user information, each wedge sector 334 may also include other data to help identify and process the user information.
In accordance with an embodiment, a portion of the disc 312 is reserved for use as a cache memory 336, which is hereinafter referred to as a first cache memory. First cache memory 336 is shown in
Disc drive 300 uses first cache 336 in conjunction with second cache 318 in order to manage data as the data is being transferred to and from its intended destination track 330 on disc 312. Because first cache 336 is located on, for example, magnetic media (i.e., disc 312), first cache 336 generally has a slower access time than second cache 318. However, first cache 336 has the advantages of a larger storage capacity and a lower cost per unit of storage than second cache 318. As such, in an embodiment, disc drive 300 manages the caching of data using each of the first cache 336 and the second cache 318 based on the access time and the available capacity of each.
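One toy way to express that trade-off, assuming only the relative properties stated above (the second cache is faster but smaller and more expensive per byte than the first cache), is sketched below; it is not the drive's actual caching policy.

```python
def pick_cache(pending_bytes: int, second_cache_free_bytes: int) -> str:
    """Place data in the fast second cache while it has room; otherwise
    spill to the larger, slower, cheaper-per-byte first (media) cache."""
    if pending_bytes <= second_cache_free_bytes:
        return "second cache 318"
    return "first cache 336"
```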
To illustrate the transmission of unaligned packets of data from a sending interface (e.g., the host computer 302) to a data storage disc 312,
The seven data packets included within the stream sent from the sending interface are shown in
The first cache 336 and second cache 318 may take on any structure known to those of skill in the art for caching data. For example, in an embodiment, data packets are stored in these cache memories in cache, or buffer, sectors (not shown). In this embodiment, the sectors of second cache 318 are sized to hold only a single data packet sent from host computer 302. In contrast, sectors of first cache 336 are sized to hold one or more data packets sent from host computer 302, by way of second cache 318. Thus, whereas sectors of second cache 318 are operable to hold one packet each, the sectors of first cache 336 are operable to hold "p" data packets each, where "p" is typically an integer number of data packets. These sectors may be of any length, but to illustrate
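For instance, with the 512-byte packets used throughout this example and an assumed 2048-byte first-cache sector (matching the destination sectors described below), each first-cache sector would hold p = 4 packets:

```python
PACKET_SIZE = 512               # bytes per packet from host computer 302
FIRST_CACHE_SECTOR_SIZE = 2048  # assumed length of one first-cache sector, in bytes

p = FIRST_CACHE_SECTOR_SIZE // PACKET_SIZE
print(p)  # 4 packets per first-cache sector; each second-cache sector holds 1
```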
As indicated earlier, when disc drive 300 is idle, for example, a read-modify-write process can be carried out for packets stored in first cache 336. Although the read-modify-write process may involve all, or a substantial portion, of the "n" packets stored in first cache 336, in the interest of simplification, the read-modify-write process is described for only packets N1, N2, N3, N4, N5, N6, and N7.
Track 330 includes a plurality of sectors 340 (shown in
Initially, and in accordance with an exemplary embodiment only, data which had been previously written to disc 312 is stored on the three contiguous sectors 405, 406 and 407, which are each 2048 bytes in length and are thus each operable to store four standard 512-byte packets issued from a sending interface (e.g., host computer 302) to disc drive 300. As such, first sector 405 initially holds four 512-byte entries of data (e.g., packets): A1, A2, A3, and A4; second sector 406 initially holds four 512-byte entries of data (e.g., packets): B1, B2, B3, and B4; and third sector 407 initially holds four 512-byte entries of data (e.g., packets): C1, C2, C3, and C4. Each of the regions of the sectors 405, 406 and 407 storing these entries of data (A1-A4, B1-B4 and C1-C4, respectively) is shown divided by dashed lines in
For purposes of illustration, correspondence between destinations in track 330, first cache 336, and portions 318-1 and 318-2 of second cache 318 is shown by vertical alignment of data; that is, data is copied or moved up and down vertical columns as it is manipulated. One skilled in the art will appreciate that, in practice, there are well-established mechanisms for recording the sector addresses to which cached data is to be written.
Referring to
Disc drive 300 moves packets N1-N7 from first cache 336 to first portion 318-1 of second cache 318. Also, disc drive 300 copies all data entries A1-A4, B1-B4 and C1-C4 from sectors 405, 406 and 407, respectively, and stores the entry copies in second portion 318-2 of second cache 318. The results of these operations, which encompass the “read” aspects of the read-modify-write process, are illustrated in
Next, disc drive 300 updates data entry A4 with data packet N1, updates data entries B1-B4 with data packets N2-N5, and updates data entries C1-C2 with data packets N6-N7. These operations constitute the "modify" aspects of the read-modify-write process. The results of these operations are illustrated in
After modifying the cached data retrieved from sectors 405, 406 and 407, the disc drive 300 proceeds to transfer the modified data from the second portion of second cache 318 to sectors 405, 406 and 407. The result of this particular process, which encompasses the “write” aspects of the read-modify-write process, is shown in
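Taken together, the read, modify, and write steps just described can be traced with the packet and entry names used above. The sketch below uses plain Python lists purely to mirror that walk-through; it is not a representation of the drive's internal data structures.

```python
# Initial state: destination sectors on track 330 and the new packets in first cache 336.
track = {
    405: ["A1", "A2", "A3", "A4"],
    406: ["B1", "B2", "B3", "B4"],
    407: ["C1", "C2", "C3", "C4"],
}
first_cache = ["N1", "N2", "N3", "N4", "N5", "N6", "N7"]

# Read: move the new packets to portion 318-1 of second cache 318 and copy the
# existing sector contents to portion 318-2.
portion_318_1 = list(first_cache)
portion_318_2 = {sector: list(entries) for sector, entries in track.items()}

# Modify: N1 replaces A4, N2-N5 replace B1-B4, and N6-N7 replace C1-C2.
portion_318_2[405][3] = portion_318_1[0]
portion_318_2[406][0:4] = portion_318_1[1:5]
portion_318_2[407][0:2] = portion_318_1[5:7]

# Write: transfer the modified sectors back to their destination locations.
for sector, entries in portion_318_2.items():
    track[sector] = list(entries)

assert track[405] == ["A1", "A2", "A3", "N1"]
assert track[406] == ["N2", "N3", "N4", "N5"]
assert track[407] == ["N6", "N7", "C3", "C4"]
```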
It should be noted that although second cache or buffer 318 is shown as a single memory unit in
The example provided in connection with
As described above, in some embodiments, the first cache may comprise a media cache and the second cache may comprise a non-volatile cache memory. A media cache may, for example, be a cache that resides in a same type of memory, or on a same type of storage medium, that includes the final physical destination locations for data packets that are received in the data storage device. In some exemplary embodiments, the non-volatile cache resides in a different type of memory than the media cache. Examples of a media cache are a cache on a portion of an optical storage medium (for example, an optical storage disc) and a cache on a portion of a magnetic storage medium (for example, a magnetic storage disc), where the optical storage medium and magnetic storage medium include final physical destination locations for user data. Referring to
In accordance with various embodiments, the methods described herein may be implemented as one or more software programs running on one or more computer processors or controllers, such as those included in devices 108, 200 and 300. Dedicated hardware implementations including, but not limited to, application specific integrated circuits, programmable logic arrays and other hardware devices can likewise be constructed to implement the methods described herein.
The illustrations of the embodiments described herein are intended to provide a general understanding of the structure of the various embodiments. The illustrations are not intended to serve as a complete description of all of the elements and features of apparatus and systems that utilize the structures or methods described herein. Many other embodiments may be apparent to those of skill in the art upon reviewing the disclosure. Other embodiments may be utilized and derived from the disclosure, such that structural and logical substitutions and changes may be made without departing from the scope of the disclosure. Additionally, the illustrations are merely representational and may not be drawn to scale. Certain proportions within the illustrations may be exaggerated, while other proportions may be reduced. Accordingly, the disclosure and the figures are to be regarded as illustrative rather than restrictive.
One or more embodiments of the disclosure may be referred to herein, individually and/or collectively, by the term “invention” merely for convenience and without intending to limit the scope of this application to any particular invention or inventive concept. Moreover, although specific embodiments have been illustrated and described herein, it should be appreciated that any subsequent arrangement designed to achieve the same or similar purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all subsequent adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the description.
The Abstract of the Disclosure is provided to comply with 37 C.F.R. §1.72(b) and is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, various features may be grouped together or described in a single embodiment for the purpose of streamlining the disclosure. This disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter may be directed to less than all of the features of any of the disclosed embodiments.
The above-disclosed subject matter is to be considered illustrative, and not restrictive, and the appended claims are intended to cover all such modifications, enhancements, and other embodiments, which fall within the true spirit and scope of the present disclosure. Thus, to the maximum extent allowed by law, the scope of the present disclosure is to be determined by the broadest permissible interpretation of the following claims and their equivalents, and shall not be restricted or limited by the foregoing detailed description.
The present application is a divisional of U.S. application Ser. No. 13/292,169, filed Nov. 9, 2011, which is based on and claims the benefit of U.S. provisional patent application Ser. No. 61/422,544, filed Dec. 13, 2010. The contents of U.S. application Ser. No. 13/292,169 and U.S. provisional patent application Ser. No. 61/422,544 are hereby incorporated by reference in their entirety.