Referring initially to
The HDD controller 12 controls a read/write mechanism 16 that includes one or more heads for writing data onto one or more disks 18. In non-limiting implementations, the HDD 10 includes plural heads and plural disks 18, with each head having a respective read element for, among other things, reading data on the disks 18 and a respective write element for writing data onto the disks 18.
The HDD controller 12 communicates with solid state memory. One such solid state memory may be volatile memory such as a Dynamic Random Access Memory (DRAM) device 20. Also, the controller 12 may communicate with solid state non-volatile memory, preferably a flash memory device 22, over an internal HDD bus 24. The HDD controller 12 also communicates with an external host computer 25 through a host interface module 26 in accordance with HDD principles known in the art. The host computer 25 can be a portable computer that can be powered by a battery, so that the HDD 10 can be a mobile HDD. The controller 12, with, e.g., the DRAM 20, may be mounted on an HDD motherboard in accordance with principles known in the art.
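Purely as an illustration of the arrangement described above, the following C sketch models the controller context, the command cache that may be held in solid state memory such as the DRAM 20, and a cached command. All type and field names (pending_cmd, hdd_context, and so on) are hypothetical and do not appear in the disclosure; the cache-size fields anticipate the quantities "N" and "n" discussed below.

```c
/* Minimal illustrative sketch; names are hypothetical, not from the disclosure. */
#include <stdint.h>
#include <stddef.h>

struct pending_cmd {              /* one host command awaiting execution     */
    uint64_t lba;                 /* target logical block address            */
    uint32_t len;                 /* transfer length in sectors              */
    uint32_t age_ms;              /* time the command has waited in cache    */
    int      is_write;            /* 0 = read, 1 = write                     */
};

struct hdd_context {
    struct pending_cmd *cmd_cache;   /* command cache, e.g., held in DRAM 20 */
    size_t cache_capacity;           /* desired cache size "N"               */
    size_t cache_count;              /* commands currently cached            */
    size_t subset_size;              /* optimal subset size "n"              */
};
```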
As stated above, the logic disclosed below may be contained in a code storage 14 that is separate from the HDD controller 12, or the storage 14 may be integrated into the controller 12. Alternatively, the logic may be contained in the read/write mechanism 16, or in the DRAM 20 or flash memory device 22. The logic may also be distributed among the components mentioned above, and may be implemented in hardware logic circuits and/or software logic circuits.
Now referring to
Block 32 indicates that, before outputting commands from the cache for execution to disk, the cache is filled to the desired cache size “N”. When the desired cache size “N” is reached, i.e., when the cache stores the desired number of commands, at block 34 substantially all “N” commands in the cache are evaluated using an execution optimization algorithm such as a greedy algorithm or an n-RPO algorithm, including expected n-RPO algorithms.
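The disclosure does not specify how each command is scored at block 34. The sketch below, which reuses the hypothetical pending_cmd structure from the earlier sketch, shows one possible greedy-style evaluation in which each cached command is scored by its estimated access time from the current head position. The geometry constants and the simple seek-plus-rotational-latency model are assumptions for illustration only.

```c
/* Hypothetical drive-geometry constants, for illustration only. */
#define SECTORS_PER_TRACK   1024u
#define SECTORS_PER_CYL     (SECTORS_PER_TRACK * 4u)   /* 4 heads assumed */
#define SEEK_US_PER_CYL     2u
#define REV_TIME_US         8333u                      /* ~7200 RPM       */

/* Estimated time to reach a command's first sector, given the current
 * cylinder and the current rotational position in microseconds past index. */
static uint32_t estimated_access_time_us(const struct pending_cmd *cmd,
                                         uint32_t cur_cyl,
                                         uint32_t cur_rot_us)
{
    uint32_t cmd_cyl  = (uint32_t)(cmd->lba / SECTORS_PER_CYL);
    uint32_t cyl_dist = cmd_cyl > cur_cyl ? cmd_cyl - cur_cyl
                                          : cur_cyl - cmd_cyl;
    uint32_t seek_us  = SEEK_US_PER_CYL * cyl_dist;

    /* rotational offset of the target sector, expressed in microseconds */
    uint32_t target_rot_us =
        (uint32_t)((cmd->lba % SECTORS_PER_TRACK) * REV_TIME_US
                   / SECTORS_PER_TRACK);
    uint32_t rot_wait = (target_rot_us + REV_TIME_US - cur_rot_us)
                        % REV_TIME_US;

    /* if the seek outlasts the rotational wait, slip whole revolutions */
    while (rot_wait < seek_us)
        rot_wait += REV_TIME_US;
    return rot_wait;
}
```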
Block 36 indicates that, of the “N” commands evaluated by the algorithm, only the “n” commands best fitting the criteria that were used to establish the optimal subset size “n” are executed to disk. These “n” commands may then be removed from the cache, while the remaining “N”-“n” commands remain in the cache. At block 38 the cache is refilled to the desired number “N” of commands prior to once again using an execution optimization algorithm to identify the “n” commands of the next successive optimal subset.
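A hedged sketch of the flow at blocks 34-38 is given below, building on the hypothetical types and cost function above: the "n" best-scoring commands are chosen greedily, executed, and removed from the cache, while the remaining "N"-"n" commands stay cached until the next pass. The helpers issue_to_disk(), current_cylinder(), and current_rot_us() stand in for drive-specific services the disclosure leaves to the implementer.

```c
/* Drive-specific services assumed to exist elsewhere in the firmware. */
extern void     issue_to_disk(const struct pending_cmd *cmd);
extern uint32_t current_cylinder(void);
extern uint32_t current_rot_us(void);

/* Choose and execute the "n" best commands; the rest remain cached. */
static void execute_optimal_subset(struct hdd_context *ctx)
{
    uint32_t cyl = current_cylinder();
    uint32_t rot = current_rot_us();

    for (size_t done = 0; done < ctx->subset_size && ctx->cache_count > 0;
         done++) {
        /* block 34: evaluate every cached command from the present position */
        size_t   best = 0;
        uint32_t best_cost = UINT32_MAX;
        for (size_t i = 0; i < ctx->cache_count; i++) {
            uint32_t cost = estimated_access_time_us(&ctx->cmd_cache[i],
                                                     cyl, rot);
            if (cost < best_cost) {
                best_cost = cost;
                best = i;
            }
        }

        /* block 36: execute the winner, then advance the modeled head
         * position (data transfer time ignored for brevity) */
        issue_to_disk(&ctx->cmd_cache[best]);
        cyl = (uint32_t)(ctx->cmd_cache[best].lba / SECTORS_PER_CYL);
        rot = (rot + best_cost) % REV_TIME_US;

        /* drop the executed command; the remaining commands stay cached */
        ctx->cmd_cache[best] = ctx->cmd_cache[ctx->cache_count - 1];
        ctx->cache_count--;
    }
    /* block 38: the caller then refills the cache back to "N" commands
     * before the next evaluation pass */
}
```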
The cache may be implemented in, e.g., the DRAM 20 or other solid state memory, or it may be implemented on a set-aside portion of the disk.
The above strategies may be combined. For instance, if it is determined that the optimal subset size will be the greatest number of commands that can be executed to disk in four disk revolutions, then the optimization algorithm will output, as its “top twenty” commands, those best fitting the selection criteria. This can be modified by requiring that any command that has been in the cache longer than a predetermined period of time be included in the next execution batch, potentially bumping a command that would otherwise be in the optimal subset back into the queue for the next processing cycle.
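One way such an aging override might be realized is sketched below, again using the hypothetical types above: commands that have waited longer than an assumed threshold reserve slots in the next batch before the optimization algorithm fills the remainder. The threshold value and helper name are illustrative only.

```c
#define AGE_LIMIT_MS  500u   /* assumed threshold, not from the disclosure */

/* Reserve batch slots for commands that have been cached too long. */
static size_t force_aged_commands(struct hdd_context *ctx,
                                  size_t *batch, size_t batch_capacity)
{
    size_t forced = 0;
    for (size_t i = 0; i < ctx->cache_count && forced < batch_capacity; i++) {
        if (ctx->cmd_cache[i].age_ms > AGE_LIMIT_MS)
            batch[forced++] = i;   /* reserve a slot in the next batch */
    }
    /* the remaining batch_capacity - forced slots are then filled by the
     * optimization algorithm, bumping its lowest-ranked picks to the next
     * processing cycle */
    return forced;
}
```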
The “n” commands in the subset, and only those commands, may be output as a group by the optimization algorithm; alternatively, all “N” commands may be ordered and output by the optimization algorithm, in which case only the top “n” commands are executed. The remaining commands are evaluated once again in the next cycle, i.e., together with the new “n” commands that have been added to bring the cache size back up to “N”.
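The second alternative might look like the following sketch, under the same assumed types and helpers as above: all "N" cached commands are ordered by estimated access time from a fixed reference head position (a simplification, since the disclosure does not say how the full ordering is produced), only the top "n" are executed, and the rest remain cached for the next cycle.

```c
#include <stdlib.h>   /* qsort   */
#include <string.h>   /* memmove */

/* Reference head position for the comparator; set before sorting. */
static uint32_t g_ref_cyl, g_ref_rot_us;

static int cmp_by_access_time(const void *a, const void *b)
{
    uint32_t ca = estimated_access_time_us((const struct pending_cmd *)a,
                                           g_ref_cyl, g_ref_rot_us);
    uint32_t cb = estimated_access_time_us((const struct pending_cmd *)b,
                                           g_ref_cyl, g_ref_rot_us);
    return (ca > cb) - (ca < cb);
}

/* Order all "N" cached commands and execute only the top "n". */
static void order_all_execute_top_n(struct hdd_context *ctx)
{
    g_ref_cyl    = current_cylinder();
    g_ref_rot_us = current_rot_us();
    qsort(ctx->cmd_cache, ctx->cache_count, sizeof(*ctx->cmd_cache),
          cmp_by_access_time);

    size_t n = ctx->subset_size < ctx->cache_count ? ctx->subset_size
                                                   : ctx->cache_count;
    for (size_t i = 0; i < n; i++)
        issue_to_disk(&ctx->cmd_cache[i]);

    /* keep the unexecuted commands; the cache is then refilled back to "N"
     * before the next evaluation pass */
    memmove(ctx->cmd_cache, ctx->cmd_cache + n,
            (ctx->cache_count - n) * sizeof(*ctx->cmd_cache));
    ctx->cache_count -= n;
}
```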
While the particular SYSTEM AND METHOD FOR INCREMENTAL RPO-TYPE ALGORITHM IN DISK DRIVE as herein shown and described in detail is fully capable of attaining the above-described objects of the invention, it is to be understood that it is the presently preferred embodiment of the present invention and is thus representative of the subject matter which is broadly contemplated by the present invention, that the scope of the present invention fully encompasses other embodiments which may become obvious to those skilled in the art, and that the scope of the present invention is accordingly to be limited by nothing other than the appended claims, in which reference to an element in the singular is not intended to mean “one and only one” unless explicitly so stated, but rather “one or more”. It is not necessary for a device or method to address each and every problem sought to be solved by the present invention, for it to be encompassed by the present claims. Furthermore, no element, component, or method step in the present disclosure is intended to be dedicated to the public regardless of whether the element, component, or method step is explicitly recited in the claims. Absent express definitions herein, claim terms are to be given all ordinary and accustomed meanings that are not irreconcilable with the present specification and file history.