This application is based upon and claims the benefit of priority from prior Japanese Patent Application No. 2010-290995, filed Dec. 27, 2010, the entire contents of which are incorporated herein by reference.
Embodiments described herein relate generally to a magnetic disk drive and a method of accessing a disk in the drive.
A host generally specifies an access destination with a logical address when accessing a magnetic disk drive. Suppose consecutive logical addresses have been allocated to, for example, consecutive tracks in a first area on a disk. In this state, suppose the host has requested the magnetic disk drive to rewrite data in a second area that is a part of the first area (more specifically, in a logical address area corresponding to the second area).
A conventional magnetic disk drive is known that rewrites such data by the following method: instead of rewriting the data stored in the second area itself, the drive writes the new data into a third area on the disk differing from the second area.
Suppose new data has been written into the third area by the above method. In this case, the allocation destination of the logical addresses allocated to the second area is changed from the second area to the third area, and the data in the second area is invalidated. That is, the mapping between logical addresses and physical addresses is changed.
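As an illustration, the following minimal C sketch shows one way such a remapping step could be realized. The table layout and the names (logical_to_physical, physical_valid, remap_after_rewrite) are hypothetical assumptions for illustration, not the format of any particular conventional drive.

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_LBAS 1024

    /* Hypothetical mapping table: logical address -> physical address. */
    static uint32_t logical_to_physical[NUM_LBAS];
    /* Whether the data at a physical location is still valid. */
    static bool physical_valid[NUM_LBAS];

    /* Instead of rewriting the old location (second area), the new data
     * is written to a free location (third area); the logical address is
     * then pointed at that location and the old one is invalidated. */
    void remap_after_rewrite(uint32_t lba, uint32_t new_physical)
    {
        uint32_t old_physical = logical_to_physical[lba];
        physical_valid[old_physical] = false;     /* invalidate second area */
        logical_to_physical[lba] = new_physical;  /* point at third area    */
        physical_valid[new_physical] = true;
    }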
In this state, suppose the host has requested access to the logical address area allocated to the first area before the data in the second area was invalidated. In this case, when the access destination reaches the second area in the first area, the access is redirected to the third area.
With the conventional magnetic disk drive, when data has been rewritten repeatedly, the tracks on the disk to which consecutive logical addresses are allocated (more specifically, the physical addresses indicating the physical locations of those tracks) become physically nonconsecutive. Therefore, with the conventional magnetic disk drive, nonconsecutive physical locations on the disk are accessed frequently. To access nonconsecutive physical locations, a seek operation for moving the head to each of them is needed. However, depending on the purpose of the disk access, the disk may need to be accessed based on the correspondence between logical addresses and physical addresses that existed before the mapping was changed.
A general architecture that implements the various features of the embodiments will now be described with reference to the drawings. The drawings and the associated descriptions are provided to illustrate the embodiments and not to limit the scope of the invention.
Various embodiments will be described hereinafter with reference to the accompanying drawings. In general, according to one embodiment, a magnetic disk drive comprises (includes) a disk, a determination module, and a controller. The determination module is configured to determine whether access to the disk requires data transfer between a host and the magnetic disk drive. The controller is configured to control the disk access according to a predetermined allocation of consecutive second logical addresses corresponding to physical addresses indicative of consecutive physical locations on the disk if the data transfer is not required. The second logical addresses are addresses different from first logical addresses recognized by the host.
In the embodiment, the magnetic disk drive is a hard disk drive (HDD) 10 that uses a known shingled write technique. The HDD 10 comprises disks (magnetic disks) 11-0 and 11-1, heads (magnetic heads) 12-0 to 12-3, a spindle motor (SPM) 13, an actuator 14, a voice coil motor (VCM) 15, a driver IC 16, a head IC 17, and a system LSI 18.
The disks 11-0 and 11-1, which are magnetic recording media, are stacked one on top of the other with a specific clearance between them. Each of the disks 11-0 and 11-1 has an upper disk surface and a lower disk surface. In the embodiment, each of the disk surfaces serves as a recording surface on which data is recorded magnetically. The disks 11-0 and 11-1 are rotated by the SPM 13 at high speed. The SPM 13 is driven by a driving current (or driving voltage) supplied from the driver IC 16. The HDD 10 may comprise a single disk.
Each of zones Z0 and Z1 is divided into a plurality of areas called roofs for management. In the example assumed here, each zone Zp (p = 0, 1) is divided into three areas A0 to A2.
In the embodiment, at least one of areas A0 to A2 in each zone Zp, for example one area, is used as a spare area. In shingled writing, the spare area is used as the move destination (rewrite destination) of the data on each track in another area in the corresponding zone Zp. When the data movement (rewrite) has been completed, the source area of the data is newly set as the spare area.
Heads 12-0 to 12-3 are attached to the tip of the actuator 14. More specifically, heads 12-0 to 12-3 are attached to the tips of suspensions extending from the four arms of the actuator 14. The actuator 14 is supported so as to move angularly around an axis 140. The actuator 14 includes the VCM 15, which is used as the driving source for the actuator 14. The VCM 15 is driven by a driving current (or driving voltage) supplied from the driver IC 16, thereby moving the actuator 14 angularly around the axis 140. This moves heads 12-0 to 12-3 in the radial direction of disks 11-0 and 11-1.
The driver IC 16 drives the SPM 13 and the VCM 15 under the control of a CPU 186 (described later) in the system LSI 18. The head IC 17 amplifies a signal (read signal) read by head 12-j (j = 0, 1, 2, 3). The head IC 17 also converts write data transferred from an R/W channel 181 (described later) in the system LSI 18 into a write current and outputs the write current to head 12-j.
The system LSI 18 is an LSI called a System-on-Chip (SoC), in which a plurality of elements are integrated into a single chip. The system LSI 18 comprises a read/write channel (R/W channel) 181, a disk controller (hereinafter referred to as the HDC) 182, a buffer RAM 183, a flash memory 184, a program ROM 185, a CPU 186, and a RAM 187.
The R/W channel 181 is a known signal processing device configured to process signals related to read/write operations. The R/W channel 181 digitizes a read signal and decodes read data from the digitized data. The R/W channel 181 also extracts, from the digitized data, the servo data necessary to position head 12-j. The R/W channel 181 also encodes write data.
The HDC 182 is connected to the host 100 via a host interface 110. The HDC 182 receives commands (e.g., write commands or read commands) transferred from the host 100. The HDC 182 controls data transfer between the host 100 and the HDC 182. The HDC 182 also controls data transfer between disk 11-i (i = 0, 1) and the HDC 182.
The buffer RAM 183 includes a buffer area that temporarily stores data to be written onto disk 11-i and data read from disk 11-i via the head IC 17 and the R/W channel 181. To speed up table references when the HDD 10 is powered on, the buffer RAM 183 further includes a table area into which a mapping table 184a and a PDM table 184b (both described later) are loaded from the flash memory 184. In the explanation below, however, for the sake of simplification, assume that the mapping table 184a and the PDM table 184b are referred to while stored in the flash memory 184.
The flash memory 184 is a rewritable nonvolatile memory. The flash memory 184 is used to store the mapping table 184a and primary defect management (PDM) table 184b. The mapping table 184a and PDM table 184b will be described later. The program ROM 185 stores a control program (firmware program) in advance. The control program may be stored in a part of the flash memory 184.
The CPU 186 functions as a main controller of the HDD 10. The CPU 186 controls at least a part of the rest of the HDD 10 according to the control program stored in the program ROM 185. A part of the RAM 187 is used as a work area of the CPU 186.
Next, the principle of the shingled writing applied to the embodiment will be explained, using tracks N to N+3 on disk 11-i as an example.
Information indicating the relationship between logical addresses and physical addresses is stored in the mapping table 184a. Referring to the mapping table 184a, the CPU 186 can determine that logical addresses n, n+1, n+2, and n+3 have been allocated to tracks N, N+1, N+2, and N+3 with physical addresses N, N+1, N+2, and N+3, respectively. Here, to simplify the explanation, suppose a logical address has been allocated to each track; in practice, it is common to allocate a logical address (LBA) to each sector on a track. The physical address of a track is composed of cylinder number C and head number H. The physical address of a sector on a track is composed of cylinder number C, head number H, and sector number S.
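One way to picture the mapping table 184a is the C sketch below. The field widths and the function name lookup are illustrative assumptions; they do not describe the actual table format of the HDD 10.

    #include <stdint.h>

    /* A physical address as described above: cylinder number C, head
     * number H, and (for a sector) sector number S. */
    struct phys_addr {
        uint16_t cylinder; /* C */
        uint8_t  head;     /* H: 0..3 for heads 12-0 to 12-3 */
        uint16_t sector;   /* S */
    };

    #define NUM_LBAS 1024

    /* Illustrative stand-in for the mapping table 184a. */
    static struct phys_addr mapping_table[NUM_LBAS];

    /* Resolve a logical address to its current physical location. */
    struct phys_addr lookup(uint32_t lba)
    {
        return mapping_table[lba];
    }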
In the embodiment, the physical address of each track on the upper disk surface of disk 11-0 corresponding to head 12-0 includes head number 0 (H=0) and the physical address of each track on the lower disk surface of disk 11-0 corresponding to head 12-1 includes head number 1 (H=1). Similarly, the physical address of each track on the upper disk surface of disk 11-1 corresponding to head 12-2 includes head number 2 (H=2) and the physical address of each track on the lower disk surface of disk 11-1 corresponding to head 12-3 includes head number 3 (H=3).
As is commonly known, the track width is narrower than the head width in shingled writing. To simplify explanation, suppose the track width is half the head width. In this case, for example, to write data in logical address n+1 onto track N+1 after data in logical address n has been written onto track N, head 12-j is shifted toward track N+1 by half the head width. After this shift, head 12-j writes data in logical address n+1 onto track N+1. Then, similarly, data in logical address n+2 is written onto track N+2. Thereafter, data in logical address n+3 is written onto track N+3.
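The following trivial C sketch merely restates this write geometry numerically, under the half-head-width assumption above; the normalized head width of 1.0 is an arbitrary unit chosen for illustration.

    #include <stdio.h>

    /* In the shingled writing assumed above, the head is shifted by half
     * its width for each successive track, so the effective track pitch
     * is head_width / 2. */
    int main(void)
    {
        const double head_width = 1.0;  /* normalized head width */
        const double track_pitch = head_width / 2.0;

        for (int t = 0; t < 4; t++)     /* tracks N to N+3 */
            printf("track N+%d is written at offset %.1f head widths\n",
                   t, t * track_pitch);
        return 0;
    }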
As described above, data is written onto tracks N, N+1, N+2, and N+3 by so-called partial overwriting. Therefore, if data (e.g., A) on track N+2, to which logical address n+2 has been allocated, were rewritten in place with data (e.g., B) requested by the host 100, the data on, for example, track N+3 next to track N+2 would also be partly overwritten.
Therefore, in the HDD 10 using shingled writing, data B is written onto a track differing from track N+2 instead of rewriting data A on track N+2 with data B. In the example assumed here, data B is written onto track N+4.
Thereafter, the allocation destination of logical address n+2 is changed from track N+2 to track N+4 (i.e., the track onto which data B has been written), and the data on track N+2 is invalidated.
Suppose data rewriting has subsequently been repeated in the same manner, with the result that, for example, the allocation destination of logical address n+1 has been changed to track N+5 and that of logical address n+2 has been further changed to track N+6.
Here, suppose the host 100 has requested the HDD 10 to read data in consecutive logical addresses n, n+1, n+2, n+3, and n+4. In this case, although the logical addresses are consecutive, they are not accessed sequentially on disk 11-i. That is, in addition to a seek operation for moving head 12-j to the beginning track N, a seek operation for moving head 12-j from track N to track N+5 and a seek operation for moving head 12-j from track N+6 to track N+3 take place. Therefore, it takes time to read the data. Moreover, as is commonly known, each track has a skew. Accordingly, when the access is not sequential, a rotational delay follows each seek operation, with the result that it takes still more time to read the data.
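The C sketch below counts the seek operations for this read. The track numbers are one mapping consistent with the seek pattern described above (expressed as offsets from track N) and are an illustrative assumption.

    #include <stdio.h>

    /* Count the seek operations needed to read consecutive logical
     * addresses n to n+4. A seek occurs whenever the next track is not
     * the one immediately following the current track. */
    int main(void)
    {
        /* Track of each logical address, as an offset from track N:
         * n -> N, n+1 -> N+5, n+2 -> N+6, n+3 -> N+3, n+4 -> N+4. */
        int track_of[5] = { 0, 5, 6, 3, 4 };
        int seeks = 1;  /* the initial seek to the beginning track N */

        for (int i = 1; i < 5; i++)
            if (track_of[i] != track_of[i - 1] + 1)
                seeks++;

        printf("%d seek operations\n", seeks);  /* prints 3 */
        return 0;
    }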
Among the commands (requests) given from the host 100 to the HDD 10, there are commands that specify an operation not requiring data to be transferred to the host 100, such as a scan test that checks, for example, a predetermined logical address area. To simplify the explanation, suppose the predetermined logical address area is the logical address area specified by logical addresses n to n+4.
A self test in Self-Monitoring Analysis and Reporting Technology (SMART), which scans the entire recording surface of a disk to check the surface, is known as a command requiring a scan test. In a self test in SMART, the HDD 10 has to inform the host 100 in advance of the time required for the scan test. When the physical addresses corresponding to logical addresses n to n+4 are nonconsecutive as described above, the required time increases because of the extra seek operations and becomes difficult to estimate accurately.
In the case of a command which specifies an operation, such as a scan test, that does not require the HDD 10 to transfer data to the host 100 (an operation involving disk access), the corresponding tracks need not necessarily be accessed in the order of logical addresses n to n+4. Therefore, it is conceivable that a scan test is executed in the order of physical addresses as follows: tracks N, N+1, N+2, . . . . Here, tracks with defect sectors (primary defect sectors) detected, for example, when the HDD 10 was manufactured might be included among tracks N, N+1, N+2, . . . . If the scan test were executed simply in the order of physical addresses, tracks with primary defect sectors would also be accessed, and an error would occur. Therefore, in a scan test, disk access that takes primary defect sectors (i.e., primary defect places) into account has to be applied.
In the embodiment, primary defect sectors are managed using the PDM table 184b. In the PDM table 184b, primary defect sectors are managed according to the default allocation of logical addresses to physical addresses (hereinafter referred to as the default address arrangement), that is, the allocation before the HDD 10 performs control for shingled writing for the first time.
The default address arrangement in the embodiment will now be explained. First, logical addresses are allocated in ascending order, beginning with LBA0, in the sector direction on track [0, 0] with cylinder 0 and head 0.
Next, logical addresses are allocated in ascending order in the sector direction on track [1, 0] with cylinder 1 and head 0. Cylinder number C is incremented repeatedly until the incremented cylinder number C reaches the last cylinder in the corresponding zone. For convenience of explanation, cylinder 3 is assumed here to be the last cylinder in zone 0.
With head 0, when logical addresses have been allocated up to the last sector in the last cylinder (cylinder 3) in zone 0, head number H is incremented from 0 to 1. Then, logical addresses are allocated in ascending order in the sector direction on track [0, 1] with cylinder 0 and head 1. Hereinafter, with head 1, logical addresses are allocated in the same manner as with head 0.
With head 1, when logical addresses have been allocated up to the last sector in the last cylinder in zone 0, head number H is incremented from 1 to 2. Then, logical addresses are allocated in ascending order in the sector direction on track [0, 2] with cylinder 0 and head 2. Hereinafter, with head 2, logical addresses are allocated in the same manner as with head 0. In this way, logical addresses are allocated repeatedly until the head whose head number H is the largest, that is, head 3, has been reached in zone 0.
With head 3, when logical addresses have been allocated up to the last sector in the last cylinder in zone 0, zone number Z is incremented from 0 to 1. Then, logical addresses are allocated in the same manner as in zone 0. In zone 1, too, logical addresses are allocated in ascending order, beginning with LBA0. Alternatively, logical addresses may be allocated to the sectors in zone 1 beginning with the logical address next to the one allocated to the last sector in the last cylinder in zone 0. The aforementioned default address arrangement is predetermined by the control program stored in the program ROM 185.
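A minimal C sketch of this default order follows; it enumerates physical locations in the sequence in which M-LBAs are allocated (ascending sector, then cylinder, then head, then zone). The zone, cylinder, and sector counts are illustrative assumptions, and cylinder numbers are shown relative to the start of each zone.

    #include <stdio.h>

    #define ZONES         2  /* zones 0 and 1 */
    #define HEADS         4  /* heads 0 to 3 */
    #define CYLS_PER_ZONE 4  /* cylinders 0 to 3, as in the example */
    #define SECTORS       4  /* sectors per track (illustrative) */

    /* Enumerate physical locations in the order in which default
     * management logical addresses (M-LBAs) are allocated: ascending
     * sector, then cylinder, then head, then zone. The M-LBA restarts
     * at 0 in each zone, matching the per-zone allocation above. */
    int main(void)
    {
        for (int z = 0; z < ZONES; z++) {
            unsigned mlba = 0;  /* per-zone M-LBA, beginning with LBA0 */
            for (int h = 0; h < HEADS; h++)
                for (int c = 0; c < CYLS_PER_ZONE; c++)
                    for (int s = 0; s < SECTORS; s++)
                        printf("zone %d: M-LBA %2u -> [C=%d, H=%d, S=%d]\n",
                               z, mlba++, c, h, s);
        }
        return 0;
    }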
The reason why the default address arrangement is applied will be explained. In the explanation given above, to simplify matters, the mapping was described as if it were changed only for the logical address (n+2) whose data was rewritten.
Actually, for example, if track N+2 belongs to area (roof) A0 in zone Z0, the mapping of logical addresses and physical addresses is changed for the logical addresses allocated to all the tracks in area A0. In addition, data A on track N+2 is rewritten with data B as follows. For example, the data on all the tracks in area A0, including track N+2, is read sequentially. Of the read data, data A corresponding to track N+2 is replaced with data B. That is, the data on all the tracks in area A0 is merged with data B. The merged data (update data) is then written (moved) sequentially into a spare area in zone Z0 by shingled writing. Suppose the spare area is area A2 in zone Z0. When the merged data has been written into area A2 as the spare area and the mapping has been changed, the spare area is changed from area A2 to area A0. That is, area A0 is used as the new spare area.
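The following C sketch outlines this merge-and-move rewrite, assuming three areas per zone with one spare. The structure and names (struct zone, rewrite_track) are hypothetical, the in-memory arrays stand in for the actual shingled read/write paths, and updating of the mapping table 184a is omitted.

    #include <string.h>

    #define TRACKS_PER_AREA 8
    #define TRACK_BYTES     4096

    /* Hypothetical per-zone state: three areas A0 to A2, one of which
     * is the current spare area. */
    struct zone {
        char areas[3][TRACKS_PER_AREA][TRACK_BYTES];
        int  spare;  /* index of the current spare area */
    };

    /* Rewrite one track in a shingled zone: read the whole source area,
     * merge in the new track data, write the merged data sequentially
     * into the spare area, and make the source area the new spare.
     * (Updating of the mapping table 184a is omitted here.) */
    void rewrite_track(struct zone *zn, int area, int track,
                       const char new_data[TRACK_BYTES])
    {
        static char merged[TRACKS_PER_AREA][TRACK_BYTES];

        memcpy(merged, zn->areas[area], sizeof merged);       /* read area      */
        memcpy(merged[track], new_data, TRACK_BYTES);         /* merge data B   */
        memcpy(zn->areas[zn->spare], merged, sizeof merged);  /* shingled write */

        zn->spare = area;  /* the old source area becomes the new spare */
    }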
As described above, with the HDD 10 using shingled writing, even if only the data on a single track Tr is rewritten, the data on all the tracks in the area Aq (q being any one of 0 to 2) including the track Tr is rewritten. In addition, the update data is written into a spare area in the zone to which area Aq belongs. That is, with the HDD 10 using shingled writing, data on track Tr is rewritten only within the zone to which track Tr belongs. The reason for this is that the recording capacity per track differs from zone to zone, so that if the zone were changed, data could not be rewritten in units of tracks. Therefore, in the embodiment, the concept of a zone is important.
In the default logical address arrangement, after logical addresses (LBA) have been allocated in ascending order in the direction in which the cylinder number increases (i.e., in the cylinder direction) in a zone, the head number is incremented. The reason for this is as follows. Firstly, it is common practice for the host 100 (user) to use logical addresses sequentially, starting with the smallest one. Secondly, the transfer rate is higher in a zone closer to the outer edge of disk 11-i. Thirdly, data (data access) has to be prevented from concentrating on a specific head. Taking these into account, the HDD 10 of the embodiment using shingled writing employs the default address arrangement (i.e., default logical address allocation). The PDM table 184b is managed based on the default address arrangement.
On the other hand, when the host 100 has specified such an operation as a scan test that does not require data transfer between the host 100 and the HDD 10, disk access need not necessarily be provided according to logical addresses reallocated by shingled writing. Therefore, in the embodiment, the CPU 186 controls such an operation as a scan test based on the default address arrangement.
The default address arrangement remains unchanged even if logical addresses (LBAs) are reallocated. A logical address applied to the default address arrangement is a logical address (a second logical address) used for management and valid only within the HDD 10. This logical address (LBA) used for management is called a management logical address (M-LBA). The management logical address (M-LBA) is not recognized by the host 100. In contrast, a logical address (a first logical address) specified by, for example, a read/write command from the host 100, that is, a logical address (LBA) recognized by the host 100, is called a host logical address (H-LBA).
As described above, the PDM table 184b manages primary defect sectors zone by zone using management logical addresses (M-LBAs). One reason for this is that disk 11-i is accessed in units of zones in shingled writing. Another reason is that the area to be referred to in the PDM table 184b can be determined quickly from the zone to be accessed.
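A C sketch of one possible per-zone layout for the PDM table 184b follows. The sorted-array representation and the binary search are illustrative assumptions; the zone-0 defect list uses the M-LBA values from the example described later.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative stand-in for the PDM table 184b: per zone, a sorted
     * list of the management logical addresses (M-LBAs) of primary
     * defect sectors. Keeping one list per zone lets the area to be
     * consulted be found directly from the zone being accessed. */
    struct pdm_zone {
        const uint32_t *defect_mlba;  /* sorted M-LBAs of defect sectors */
        size_t          count;
    };

    static const uint32_t zone0_defects[] = { 0, 100, 101 };
    static const struct pdm_zone pdm_table[2] = {
        { zone0_defects, 3 },  /* zone 0 */
        { NULL, 0 },           /* zone 1: no primary defects */
    };

    /* Binary-search the defect list of the given zone. */
    bool is_primary_defect(int zone, uint32_t mlba)
    {
        const struct pdm_zone *z = &pdm_table[zone];
        size_t lo = 0, hi = z->count;

        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            if (z->defect_mlba[mid] == mlba) return true;
            if (z->defect_mlba[mid] < mlba)  lo = mid + 1;
            else                             hi = mid;
        }
        return false;
    }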
Next, an operation in the embodiment will be explained with reference to a flowchart.
A command given to the HDD 10 by the host 100 is received by the HDC 182 of the HDD 10 (block 901). Then, the CPU 186, which functions as a determination module, determines whether the command received by the HDC 182 is a command that needs data transfer between the host 100 and the HDD 10 (block 902).
If the command needs data transfer (Yes in block 902), the CPU 186 functions as an address translator and converts the consecutive host logical addresses (H-LBAs) in the logical address area specified by the command (a host logical address area) into the corresponding physical addresses (block 903). The mapping table 184a is used in this conversion.
Next, the CPU 186 controls disk access specified by the host 100 based on physical addresses corresponding to the host logical addresses (H-LBAs) (block 904). Here, the physical addresses corresponding to the consecutive host logical addresses (H-LBAs) may be nonconsecutive as a result of the repetition of shingled writing.
As described above, in the case of disk access that requires data transfer between the host 100 and the HDD 10, the CPU 186 selects disk access according to host logical addresses (H-LBAs). That is, the CPU 186 functions as a disk access selector according to the result of the determination in block 902 and selects disk access according to host logical addresses (H-LBAs).
On the other hand, if the command is a command that does not require data transfer (No in block 902), the CPU 186 controls disk access according to the default address arrangement (block 905). That is, the CPU 186 controls disk access according to the predetermined allocation of management logical addresses (M-LBAs) to physical addresses. This causes disk access requiring no data transfer between the host 100 and HDD 10 to be provided zone by zone in the order of M-LBAs in the default address arrangement.
As described above, in the case of disk access that does not require data transfer between the host 100 and the HDD 10, the CPU 186 selects disk access that follows the allocation of management logical addresses (M-LBAs) to physical addresses. That is, the CPU 186 functions as a disk access selector according to the result of the determination in block 902 and selects disk access that follows the predetermined allocation of management logical addresses (M-LBAs) to physical addresses.
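Put together, the selection of blocks 902 to 905 can be pictured with the C sketch below. The command structure and the two stub functions are hypothetical stand-ins for the actual firmware paths.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical command descriptor; needs_transfer mirrors the
     * determination of block 902. */
    struct command {
        bool     needs_transfer;  /* data transfer with the host required? */
        uint32_t first_hlba;      /* start of the host logical address area */
        uint32_t count;
    };

    /* Blocks 903 and 904: translate H-LBAs through the mapping table
     * 184a and access the resulting physical addresses (stubbed). */
    static void access_by_hlba(uint32_t hlba, uint32_t count)
    {
        printf("access H-LBA %u to %u via the mapping table\n",
               hlba, hlba + count - 1);
    }

    /* Block 905: access zone by zone in the order of M-LBAs in the
     * default address arrangement (stubbed). */
    static void access_by_default_arrangement(void)
    {
        printf("access in default M-LBA order\n");
    }

    /* The selection made by the CPU 186 acting as a disk access selector. */
    void handle_command(const struct command *cmd)
    {
        if (cmd->needs_transfer)
            access_by_hlba(cmd->first_hlba, cmd->count);
        else
            access_by_default_arrangement();
    }

A scan test command, for example, would arrive with needs_transfer false and therefore take the block-905 path.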
In the embodiment, as explained above in connection with the default address arrangement, the physical addresses (more precisely, the sectors in the physical addresses) to which management logical addresses (M-LBAs) are allocated in ascending order are arranged sequentially for each of head 0 to head 3 (that is, for each recording surface of disks 11-0 and 11-1).
Here, suppose the disk access that requires no data transfer is disk access for a known scan test in SMART. In this case, because the scan proceeds zone by zone in the order of M-LBAs, physically consecutive locations on each recording surface are accessed sequentially, so that seek operations occur far less frequently than in access in the order of host logical addresses.
In block 905, the CPU 186 refers to the area corresponding to the zone currently being processed in the PDM table 184b and excludes the primary defect sectors registered in that area from the disk access.
For example, M-LBA=000, M-LBA=100, and M-LBA=101 are managed as primary defects in zone 0 in the PDM table 184b. Accordingly, when zone 0 is processed, the sectors indicated by these management logical addresses are skipped.
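A self-contained C sketch of such a scan follows, using the zone-0 defect list above. The zone size and the linear defect check are simplifications of the PDM table sketch given earlier.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MLBAS_PER_ZONE 512  /* illustrative zone size */

    /* Zone-0 primary defects from the example above: M-LBA 000, 100,
     * and 101. The PDM table sketch given earlier would supply this
     * list per zone; a linear check suffices here. */
    static const uint32_t zone0_defects[] = { 0, 100, 101 };

    static bool is_primary_defect(uint32_t mlba)
    {
        for (unsigned i = 0; i < 3; i++)
            if (zone0_defects[i] == mlba)
                return true;
        return false;
    }

    /* Scan zone 0 in ascending M-LBA order, skipping primary defect
     * sectors so that no access error occurs (block 905). */
    int main(void)
    {
        for (uint32_t mlba = 0; mlba < MLBAS_PER_ZONE; mlba++) {
            if (is_primary_defect(mlba))
                continue;                       /* skip defect sector */
            printf("scan M-LBA %03u\n", mlba);  /* stand-in for the check */
        }
        return 0;
    }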
This enables an error caused by access to a primary defect sector to be prevented in the scan test applied to the embodiment. That is, primary defect sectors can be processed properly. In the embodiment, the scan test is executed at the request of the host 100. However, the scan test may also be executed automatically within the HDD 10. According to at least one embodiment explained above, it is possible to provide a magnetic disk drive and a magnetic disk access method which are capable of preventing nonconsecutive physical locations on a disk from being accessed frequently in disk access that does not require data transfer between the host and the drive.
The various modules of the systems described herein can be implemented as software applications, hardware and/or software modules, or components on one or more computers, such as servers. While the various modules are illustrated separately, they may share some or all of the same underlying logic or code.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.