This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2009-171373, filed Jul. 22, 2009, the entire contents of which are incorporated herein by reference.
Embodiments described herein relate generally to a cache memory control method, and an information storage device comprising a cache memory.
Various information storage devices have been developed, such as a magnetic disk device (hard disk drive: HDD) comprising a cache memory to increase access speed. The cache memory is a high-speed buffer for temporarily retaining data input/output between a host computer or the like and the information storage device. A copy of part of the data on the information storage device is stored in the cache memory. As this cache memory, a high-speed semiconductor memory such as a static RAM (SRAM) or a dynamic RAM (DRAM) is generally used.
Recently, high-capacity HDDs have been increasingly supplied at low cost, and HDDs in the several hundred gigabyte class or the terabyte class are used in, for example, AV personal computers, digital televisions and digital video recorders. A relatively high-capacity cache memory is used in such a high-capacity HDD.
Various improvements have been proposed for write control of the cache memory. In one example, a write cache is divided into n cache blocks, cache directories are provided for the respective blocks, and each directory is provided with a disk address recording section, an offset information recording section and a data length recording section (see Jpn. Pat. Appln. KOKAI Publication No. 5-314008). Here, the offset information recording section indicates the distance from a head address on a disk (recording medium) to the address on the disk where valid data is to be written. When data in the cache block is stored in the disk, writing starts at an address offset from the address indicated by the disk address recording section by the number of sectors indicated by the offset information recording section, and continues for the number of sectors indicated by the data length recording section.
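As a rough illustration of this scheme (the structure and names below are illustrative assumptions, not taken from the cited publication), such a cache directory entry and its use can be pictured as follows:

```c
#include <stdint.h>

/* Illustrative cache directory entry for one of the n cache blocks. */
struct cache_directory {
    uint64_t disk_address;  /* head address on the disk                    */
    uint32_t offset;        /* sectors from the head address to the first
                               sector where valid data is to be written    */
    uint32_t data_length;   /* number of sectors of valid data             */
};

/* Writing to the disk starts at disk_address + offset (in sectors) and
 * continues for data_length sectors.                                      */
static uint64_t write_start_sector(const struct cache_directory *d)
{
    return d->disk_address + d->offset;
}
```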
In another example, a cache memory is divided into N cells, and data to be read from or written into a disk is written into the cache memory from a position corresponding to the remainder obtained when an address on the disk is divided by a predetermined value N (see Jpn. Pat. Appln. KOKAI Publication No. 2003-330796).
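Similarly, a minimal sketch of this mod-N placement (with the cell count as an assumed parameter) might be:

```c
#include <stdint.h>

/* The cell (and hence the cache position) for a given disk address is the
 * remainder of the address divided by the predetermined value N.          */
static uint64_t cache_cell(uint64_t disk_address, uint64_t n_cells)
{
    return disk_address % n_cells;
}
```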
In the information storage device comprising the cache memory, a write command and data are first written into the cache memory (write cache) when a write access request is issued to the information storage device from the host computer or the like. In the simplest case, the write command and the data only have to be written continuously into the cache memory (by simply incrementing the cache memory address). However, with such a simple method, a huge cache memory needs to be managed sector by sector, so the amount of decoding increases and the management becomes extremely complicated.
There are methods of decreasing the amount of decoding, wherein a cache memory is managed by dividing it into particular units (segments such as blocks or cells) (Jpn. Pat. Appln. KOKAI Publication No. 5-314008 or Jpn. Pat. Appln. KOKAI Publication No. 2003-330796). For example, given that one sector has 512 bytes and the unit of a segment is 4 kilobytes (4 kB), one segment covers eight sectors, so that the amount of decoding is reduced to ⅛. However, even in this case, address information (e.g., the head position and length of the information to be written) has to be retained per write command. Moreover, if writing always starts from the head of a segment and is completed in the middle of the segment, the remaining part of the segment becomes an unusable, wasted region.
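As a minimal sketch of this bookkeeping arithmetic (constants as in the example above; the names are ours, not the references'):

```c
#include <stdint.h>

#define SECTOR_SIZE   512u                 /* bytes per sector            */
#define SEGMENT_SIZE  4096u                /* bytes per segment (4 kB)    */

/* With 4 kB segments, each segment covers 8 sectors, so the number of
 * entries to decode/manage drops to 1/8 of per-sector management.        */
static uint32_t sectors_per_segment(void)
{
    return SEGMENT_SIZE / SECTOR_SIZE;     /* = 8 */
}

/* Number of management entries needed for a cache of the given size.     */
static uint32_t segments_for(uint32_t cache_bytes)
{
    return cache_bytes / SEGMENT_SIZE;
}
```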
Additional advantages and modifications will readily occur to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details and representative embodiments shown and described herein. Accordingly, various modifications may be made without departing from the spirit or scope of the general inventive concept as defined by the appended claims and their equivalents. In the following description, the term “unit” is used to mean a “unit or module”.
In general, according to one embodiment of a cache memory control method of the invention, the data write position within a segment of the cache memory is set to an address offset by the lower bits of the logical block address (LBA) of the write data, in order to solve the problem of the remaining part of a segment turning into a wasted region.
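As a minimal sketch of this idea (assuming 512-byte sectors, a power-of-two segment size and illustrative names, rather than the embodiment's actual firmware), the write position within a segment can be derived directly from the lower bits of the LBA:

```c
#include <stdint.h>

#define SECTOR_SIZE       512u
#define SEGMENT_SIZE      4096u                          /* 4 kB segment  */
#define SECTORS_PER_SEG   (SEGMENT_SIZE / SECTOR_SIZE)   /* = 8           */

/* Byte offset inside a segment: the lower bits of the LBA (LBA modulo the
 * number of sectors per segment) select the sector slot in the segment.   */
static uint32_t in_segment_offset(uint64_t lba)
{
    return (uint32_t)(lba & (SECTORS_PER_SEG - 1)) * SECTOR_SIZE;
}

/* Cache address at which the first sector of the write data is placed,
 * given the base address of the segment chosen for it.                    */
static uint64_t write_position(uint64_t segment_base, uint64_t lba)
{
    return segment_base + in_segment_offset(lba);
}
```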
When the invention is put into practice, the segments of the cache memory can be used without waste. That is, even when writing is completed in the middle of a segment of the cache memory, the remaining part of the segment does not turn into a wasted region.
Various embodiments of the invention will hereinafter be described with reference to the drawings.
The operation of reading from or writing into the media drive 110 is performed via the cache memory unit 100. In response to an instruction from the host computer 10, the cache memory unit 100 writes the write data from the host computer 10 into the media drive 110, or transfers the read data from the media drive 110 to the host computer 10.
Specifically, the cache memory unit 100 comprises a cache memory 106, a data transfer controller 104 for transferring the write data from the host computer 10 to the cache memory 106 or transferring the read data from the cache memory 106 to the host computer 10, and a cache controller 102 for controlling the operation of the data transfer controller 104 and the operation of the cache memory 106. Here, a storage area of the cache memory 106 is divided into a plurality of segments of a predetermined size, and a segment management table 102a into which information for managing the segments is written is connected to the cache controller 102.
In response to an instruction (e.g., a write command or a read command) from the host computer 10, the cache controller 102 performs control to write the write data from the host computer 10 into the media drive 110, or performs control to send the read data from the media drive 110 back to the host computer 10. In this case, if the cache memory 106 holds the same data as the data stored in the media drive 110 that is to be read by the host computer 10 (cache hit), a copy of the data to be read is transferred from the cache memory 106 to the host computer 10 at high speed. The function of the cache controller 102 is implemented by a hardware logic circuit or by firmware running on a microcomputer.
Here, high-speed processing can be easily achieved when the cache controller 102 is configured as a hardware logic circuit. On the other hand, when the cache controller 102 is implemented as firmware, processing is slower than with a hardware logic circuit, but the contents of the cache control processing can be changed more easily.
In summary, the cache memory unit 100 described above operates as follows.
When the write data is written into the cache memory 106, which is divided into segments of a predetermined size (e.g., 1 kB, 2 kB, 4 kB, 8 kB, 16 kB, 32 kB, 64 kB, 128 kB or 256 kB), the cache controller 102 uses the lower bits of the logical block address (LBA) of the write data as an offset address within the segment.
When recording information in the media drive 110, the host computer 10 issues a write command and the write data to the cache memory unit 100. On receiving the write command, the cache controller 102 determines the segment or segments of the cache memory 106 to be used to write the write data.
At the time of this determination, if there is an unused segment (a segment in which no data is written) in the cache memory 106, this unused segment is used first to write the write data. When there is no unused segment in the cache memory 106, one or more segments holding old write data are used in chronological order (or in ascending order of the number of cache hits during reading). If there are initially unused segments in the cache memory 106 but no unused segments remain during the cache write, one or more segments holding old write data are likewise used in chronological order (or in ascending order of the number of cache hits).
In addition, instead of simply using the segments having old write data first, the priorities of the segments to be used can be weighted in the cache controller 102 from the beginning.
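A rough sketch of the basic selection policy described above (excluding the optional weighting; the structure fields and function names are illustrative assumptions, not the actual firmware) might look like this:

```c
#include <stdint.h>
#include <stddef.h>

struct segment {
    uint8_t  in_use;       /* 0: unused, 1: holds cached write data        */
    uint64_t last_write;   /* time stamp of the last write to the segment  */
    uint32_t hit_count;    /* cache hits observed when reading this data   */
};

/* Pick the next segment to use: an unused one if any exists, otherwise the
 * one holding the oldest write data (alternatively, the one with the
 * fewest read cache hits could be chosen instead).                        */
static size_t pick_segment(const struct segment *seg, size_t n)
{
    size_t oldest = 0;
    for (size_t i = 0; i < n; i++) {
        if (!seg[i].in_use)
            return i;                       /* unused segment found        */
        if (seg[i].last_write < seg[oldest].last_write)
            oldest = i;
    }
    return oldest;                          /* reuse the oldest write data */
}
```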
When one or more segments to be used to write the write data are determined, write flags are set for these segments, a write start address SA and a write end address EA (corresponding to the access range of the write data) are set, and then a segment size corresponding to the write data is set (ST12). Set information corresponding to these settings is stored in the segment management table 102a.
For example, 2 kB to 16 kB are set as segment sizes if the write data is text data or static image data, or 16 kB to 64 kB are set as segment sizes if the write data is moving image data. Moreover, 500 segments are used if, for example, the segment size is 16 kB and 8 MB of write data is to be cached.
In addition, information on the kind of write data (text data, static image data or moving image data) and/or on the bit rate of the write data (e.g., 2.2 Mbps, 4.6 Mbps, 16 Mbps or 24 Mbps in the case of the moving image data) can be included in the command sent from the host computer 10 to the cache controller 102.
Furthermore, offset data indicating the position within the segment given by the lower bits of the LBA can also be stored in the segment management table 102a.
Although not shown, setting information for each segment to be stored in the segment management table 102a can properly include a flag for indicating whether data has been written in the segment, a flag for indicating whether the segment has any free space, a time stamp indicating the time when data has been written into the segment last, and information on, for example, the number of cache hits in reading of the data written in the segment.
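Gathering the fields mentioned above, one per-segment entry of the segment management table 102a might be sketched as follows (the field names and widths are illustrative assumptions; the actual table layout is not specified here):

```c
#include <stdint.h>

/* One entry of the segment management table 102a (illustrative layout). */
struct segment_entry {
    uint8_t  write_flag;    /* set when the segment is used for a write    */
    uint8_t  has_data;      /* whether any data has been written           */
    uint8_t  has_free;      /* whether the segment still has free space    */
    uint32_t start_addr;    /* write start address SA                      */
    uint32_t end_addr;      /* write end address EA                        */
    uint32_t segment_size;  /* segment size chosen for this write data     */
    uint32_t offset;        /* in-segment offset from the lower LBA bits   */
    uint64_t time_stamp;    /* time of the last write into the segment     */
    uint32_t hit_count;     /* cache hits when reading this segment        */
};
```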
When the setting of information in ST12 is finished for one or more segments to be used to write the write data, whether the current writing position of the write data continues after the previous writing is checked (ST14). For example, if the current write continues immediately after the LBA at which data was previously written (ST14 YES), the write data is written into the cache memory 106 so that it continues from the end of the previously written data (ST16).
On the other hand, if the current write (here called write a1) is directed to a part before the LBA at which data was previously written (ST14 NO), the write a1 is written into the cache memory 106 starting at an address Ax chosen so that the end of the write a1 is located immediately before the previously written data (ST20).
Here, the position of the address Ax in the cache memory 106 is an offset within the segment to which the head data of the write a1 belongs. The lower bits of the LBA of the head data of the write a1 indicate this offset position.
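A compact sketch of this placement decision (the function and field names are illustrative assumptions) is shown below: a write that continues the previous one is appended after it (ST16), while a write that does not continue it (for example, one directed to a part before the previously written LBA) is placed at the in-segment offset given by the lower LBA bits, so that its end abuts the previously written data (ST20).

```c
#include <stdint.h>

#define SECTOR_SIZE     512u
#define SECTORS_PER_SEG 8u               /* assumes 4 kB segments          */

/* State of the most recent cached write (illustrative). */
struct cache_state {
    uint64_t last_lba_end;   /* LBA immediately after the previous write   */
    uint64_t last_cache_end; /* cache address just after that data         */
};

/* Return the cache address at which the new write data should start. */
static uint64_t place_write(const struct cache_state *st,
                            uint64_t lba, uint64_t segment_base)
{
    if (lba == st->last_lba_end) {
        /* ST14 YES -> ST16: continue immediately after the previous data. */
        return st->last_cache_end;
    }
    /* ST14 NO -> ST20: start at the in-segment offset given by the lower
     * bits of the LBA, so that the end of this write lands immediately
     * before previously written data with no gap in the segment.          */
    return segment_base + (lba & (SECTORS_PER_SEG - 1)) * SECTOR_SIZE;
}
```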
In this manner, the device checks whether the current write continues after the previously written data (ST14); if it does, the write data is placed so that it continues from the end of the previous write in the cache memory 106 (ST16). Otherwise, the device places the write data at the in-segment offset given by the lower bits of its LBA, so that its end is located immediately before the previously written data (ST20). Here, the operation of the device can be summarized in the following points.
(01) For example, when a write is generated in a part after the LBA at which data has already been written, the new write data is arranged so that its head continues immediately after the previously written data in the cache memory 106.
(02) When a write is generated in a part before the LBA at which data has already been written, the new write data is arranged, using the lower bits of its LBA as the in-segment offset, so that its end is located immediately before the previously written data.
(11) In the cache memory 106, when a write command is issued to a part before or after the LBA at which data has already been written, any segment in the cache memory 106 can be used without leaving a space, so that waste of cache areas can be eliminated. In other words, if a write is generated in an area that connects with the currently registered LBA, the new write is placed so that it continues on the cache, and wasteful regions can be reduced or removed. Moreover, data that is continuous in terms of LBA can be arranged continuously within the cache memory.
(12) Furthermore, the link information (not shown) that is needed when cache data is scattered across the cache memory 106 can be consolidated, so that the amount of information needed for cache management can be reduced. That is, the cache memory can be managed easily and, at the same time, used efficiently without waste.
EXAMPLE OF CORRESPONDENCE BETWEEN EMBODIMENT AND INVENTION
(a) In the method of controlling the cache memory 106 divided into segments of a predetermined size, the lower bits of the logical block address (LBA) of the write data are used as an address offset in the segment when the write data from the host 10 is written into the cache memory (ST20). That is, the cache memory is managed in particular units (segments), and when the write data from the host is written into the cache, the lower bits of the LBA are used as an offset address in the segment.
(b) When a write is generated in a part before the logical block address (LBA) where data has been previously written (ST14 NO), data that is about to be written is arranged in the cache memory so that the end of this data is located immediately before the previously written data (ST20). That is, if a write is generated in a part before the LBA where data has been previously written, data that is about to be written is arranged in the cache memory so that the end of this data is located immediately before the previously written data. In this case, since the LBA is used as the offset address, the data that is about to be written is arranged without waste so that no free space is produced in the segment.
(c) When a write is generated in a part after the logical block address (LBA) where data has been previously written (ST14 YES), data that is about to be written is arranged in the cache memory so that the head of this data is located immediately after the previously written data (ST16). That is, if a write is generated in a part after the LBA where data has been previously written, data that is about to be written is arranged in the cache memory so that the head of this data is located immediately after the previously written data. In this case as well, the data that is about to be written is arranged without waste so that no free space is produced in the segment.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.