The present inventive concepts relate to storage device input/output (I/O) techniques, and more particularly, to a system and method for improving storage device I/O performance using an I/O interceptor with dynamic heterogeneous flush control logic embedded within a storage stack of a computerized device such as a server.
Performance demands on computing devices such as enterprise servers are increasing. While the processor and networking performance of such server devices has steadily improved, the local storage stack within such devices has not advanced as quickly. Servers and server racks today often have multiple heterogeneous storage devices within or otherwise associated with them. Flush requests, which are commonly used to ensure data consistency, have become barriers to performance because I/Os are blocked during the flush command, and because no distinction is made between different kinds of storage devices, which further impacts the overall performance of these systems. Consequently, the storage stack and associated local storage devices of such server systems have become a performance bottleneck. Embodiments of the inventive concept address these and other limitations in the prior art.
Embodiments may include an input/output (I/O) interceptor logic section. The I/O interceptor logic section may include an I/O interface communicatively coupled with a storage stack and configured to intercept a plurality of write I/Os and a plurality of flush requests from an application. The I/O interceptor logic section may include a plurality of write holding buffers each associated with a corresponding non-volatile storage device from among a plurality of non-volatile storage devices, wherein each of the write holding buffers is configured to receive a subset of the plurality of write I/Os from the I/O interface and to store the subset of write I/Os. The I/O interceptor logic section may include a dynamic heterogeneous flush control logic section configured to receive the plurality of flush requests from the I/O interface, and to communicate write I/O completion of the plurality of write I/Os to the application without the plurality of write I/Os having been written from the plurality of write holding buffers to the corresponding non-volatile storage device from among the plurality of non-volatile storage devices.
Embodiments may include an input/output (I/O) interceptor logic section comprising an I/O interface communicatively coupled with a storage stack and configured to intercept a plurality of write I/Os and a plurality of flush requests from an application. The I/O interceptor logic section may include a plurality of write holding buffers configured to receive the plurality of write I/Os from the I/O interface and to store the plurality of write I/Os. Each of the write holding buffers may include a multiple-buffer holding queue configured to hold a plurality of write holding sub-buffers. The I/O interceptor logic section may include a dynamic heterogeneous flush control logic section configured to receive the plurality of flush requests from the I/O interface, to communicate write I/O completion of the plurality of write I/Os to the application without the plurality of write I/Os having been written to a plurality of non-volatile storage devices, and to cause the multiple-buffer holding queue to empty the plurality of write I/Os from the plurality of write holding sub-buffers to the plurality of non-volatile storage devices responsive to a number of flush requests from among the plurality of flush requests being equal to or greater than a dynamic flush threshold.
Embodiments may include a computer-implemented method for intercepting input/outputs (I/Os) from an application using an I/O interceptor logic section, the method comprising intercepting, by an I/O interface of the I/O interceptor logic section, a plurality of write I/Os and a plurality of flush requests from the application. The method may include storing, by a plurality of write holding buffers, the plurality of write I/Os intercepted by the I/O interface. The method may include receiving, by a dynamic heterogeneous flush control logic section, the plurality of flush requests from the I/O interface. The method may include communicating, by the dynamic heterogeneous flush control logic section, write I/O completion of the plurality of write I/Os to the application without the plurality of write I/Os having been written to a plurality of non-volatile storage devices, wherein each of the write holding buffers is associated with a corresponding one of the non-volatile storage devices. The method may include causing, by the dynamic heterogeneous flush control logic section, the write I/Os to be written to the plurality of non-volatile storage devices responsive to a number of flush requests from among the plurality of flush requests being equal to or greater than a dynamic flush threshold.
The foregoing and additional features and advantages of the present inventive principles will become more readily apparent from the following detailed description, made with reference to the accompanying figures, in which:
Reference will now be made in detail to embodiments of the inventive concept, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth to enable a thorough understanding of the inventive concept. It should be understood, however, that persons having ordinary skill in the art may practice the inventive concept without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first power switch cell could be termed a second power switch cell, and, similarly, a second power switch cell could be termed a first power switch cell, without departing from the scope of the inventive concept.
The terminology used in the description of the inventive concept herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the inventive concept. As used in the description of the inventive concept and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The components and features of the drawings are not necessarily drawn to scale.
As shown in
The I/O interceptor logic section 120 may include an I/O interface 155, one or more temporary write holding buffers 160, a re-order logic section 165, and a dynamic heterogeneous flush control logic section 170. The I/O interface 155 may be communicatively coupled with the storage stack 125. The I/O interceptor logic section 120 may intercept all I/Os, including flush requests, from the application (e.g., 132). For example, the I/O interface 155 may intercept write I/Os, read I/Os, and flush requests from the application (e.g., 132). The temporary write holding buffers 160 may receive the write I/Os from the I/O interface 155 and may store the write I/Os. The temporary write holding buffers 160 may be pre-allocated from free system memory.
The re-order logic section 165 may be communicatively coupled to the temporary write holding buffers 160. The re-order logic section 165 may change an order of the write I/Os stored in each of the temporary write holding buffers 160. The re-order logic section 165 may combine the re-ordered write I/Os into a combined write I/O for each of the temporary write holding buffers 160, as further described below. The dynamic heterogeneous flush control logic section 170 may receive the flush requests from the I/O interface 155. The dynamic heterogeneous flush control logic section 170 may communicate write I/O completion of the write I/Os to the application (e.g., 132) without the write I/Os actually having been written to any of the one or more non-volatile storage devices 135.
From the application's perspective, the write I/O has completed, while in reality, the write I/O has not been committed to non-volatile storage at this point in time. In other words, the dynamic heterogeneous flush control logic section 170 may communicate write I/O completion of the write I/Os to the application 132 before the write I/Os have actually been written to any of the one or more non-volatile storage devices 135. The dynamic heterogeneous flush control logic section 170 may cause the combined write I/O to be written to corresponding non-volatile storage devices 135 responsive to a number of flush requests being equal to or greater than a dynamic flush threshold, or other criteria, as also further described below.
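By way of illustration only, the intercept-and-defer behavior described above might be sketched as follows. This is a minimal sketch, not the disclosed implementation: the names FlushControl, HoldingBuffer, WriteIO, and the device's write_combined call are all invented here for clarity.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class WriteIO:
    lba: int        # logical block address
    data: bytes

@dataclass
class HoldingBuffer:
    writes: List[WriteIO] = field(default_factory=list)

class FlushControl:
    """Acknowledge flushes immediately; commit only at a dynamic threshold."""

    def __init__(self, device, flush_threshold: int):
        self.device = device                  # final non-volatile target (assumed API)
        self.buffer = HoldingBuffer()
        self.flush_threshold = flush_threshold
        self.pending_flushes = 0

    def on_write(self, io: WriteIO) -> str:
        self.buffer.writes.append(io)         # held in memory, not yet durable
        return "COMPLETED"                    # reported to the application early

    def on_flush(self) -> str:
        self.pending_flushes += 1
        if self.pending_flushes >= self.flush_threshold:
            self._committed_flush()
        return "COMPLETED"                    # the application never blocks here

    def _committed_flush(self) -> None:
        # Combine the held writes and commit them synchronously.
        self.device.write_combined(self.buffer.writes)
        self.buffer.writes.clear()
        self.pending_flushes = 0
```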
The attributes of the non-volatile storage devices used to hold the buffers 162 can be used to determine dynamic flushes, as further described below. For example, when non-volatile devices such as PRAM, MRAM, or 3D XPoint devices hold the write holding buffers 162, there is less risk of data loss, and in some cases these devices have differing or higher capacity. Consequently, the write holding buffers 162 can be flushed less often. Even though the physical non-volatile devices that store the write holding buffers 162 are among the physical hardware devices 115, the logical control of the non-volatile write holding buffers 162 may remain with the I/O interceptor logic section 120, which is instantiated on the host 105.
In some embodiments, the physical devices (e.g., PRAM, MRAM, or 3D XPoint memory devices) that store the write holding buffers 162 need not be the same as the physical devices 135 used as final flush targets. In some embodiments, the physical devices (e.g., PRAM, MRAM, or 3D XPoint memory devices) that store the write holding buffers 162 can be the same as at least some of the physical devices 135 used as final flush targets, in a hierarchical tiered system.
The detailed description is not repeated for components having the same reference numerals as the system stack 100 described above.
In some embodiments, each of the temporary write holding buffers (e.g., 160A, 160B, through 160N) may be associated with or otherwise coupled to a corresponding non-volatile storage device (e.g., 135A, 135B, through 135N). In addition, each of the temporary write holding buffers (e.g., 160A, 160B, through 160N) may receive a subset of the write I/Os from the I/O interface 155 and temporarily store the subset of write I/Os. The dynamic heterogeneous flush control logic section 170 may receive flush requests from the I/O interface 155. The dynamic heterogeneous flush control logic section 170 may communicate write I/O completion of the write I/Os to the application (e.g., 132 of
In some embodiments, each of the non-volatile storage devices (e.g., 135A, 135B, through 135N) may be of a different kind, i.e., of a heterogeneous nature. For example, the non-volatile storage devices (e.g., 135A, 135B, through 135N) may correspond to the various different kinds of non-volatile storage devices 135 described above with reference to
The dynamic heterogeneous flush control logic section 170 may dynamically vary a flush threshold, as further described in detail below, dependent on which kind of non-volatile storage device the write I/Os are being sent to, and/or dependent on which kind of connection the write I/Os are being sent over. Put differently, the dynamic flush threshold may be dependent on a kind of the corresponding non-volatile storage device or connection to which the combined write I/O is sent. In other words, a flush threshold can be variable based on the characteristics of the underlying storage or connection, including characteristics of underlying cache and/or the characteristics of the underlying final physical storage device. The rate of actual committed flushes may be varied depending on the type of underlying physical storage to which the write I/Os and flushes are being sent. The actual committed flushes may be varied from drive to drive, or from one non-volatile storage device to another.
The dynamic heterogeneous flush control logic section 170 may include a plurality of flush thresholds (e.g., 172A, 172B, through 172N) each associated with a corresponding one of the non-volatile storage devices (e.g., 135A, 135B, through 135N). In some embodiments, each of the flush thresholds (e.g., 172A, 172B, through 172N) is different relative to each other, and based on the kind of the underlying non-volatile storage device or connection with which it is associated. The dynamic heterogeneous flush control logic section 170 may cause the write I/Os from each of the temporary write holding buffers (e.g., 160A, 160B, through 160N) to be written to the corresponding non-volatile storage device (e.g., 135A, 135B, through 135N) responsive to a number of flush requests being equal to or greater than a corresponding flush threshold (e.g., 172A, 172B, through 172N).
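A per-device threshold might be realized as a simple lookup keyed by storage kind, as in the sketch below; the device kinds and numeric values are placeholders invented for illustration, not figures from the disclosure.

```python
# Placeholder threshold table keyed by storage kind (illustrative values only).
FLUSH_THRESHOLDS = {
    "nvme_ssd": 4,     # fast device, flushed relatively often
    "sata_ssd": 8,
    "hdd": 16,         # slow seeks favor larger combined writes
    "nvdimm": 64,      # low data-loss risk, flushed rarely
}

def threshold_for(device_kind: str) -> int:
    """Select the flush threshold for a given non-volatile storage device kind."""
    return FLUSH_THRESHOLDS.get(device_kind, 2)   # conservative default
```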
The temporary write holding buffers 160 may include a first temporary write holding buffer 160A, a second temporary write holding buffer 160B, and a third temporary write holding buffer 160N, and so forth. The non-volatile storage devices 135 can include a first non-volatile storage device 135A, a second non-volatile storage device 135B, a third non-volatile storage device 135N, and so forth. The first temporary write holding buffer 160A may be associated with the first non-volatile storage device 135A. The second temporary write holding buffer 160B may be associated with the second non-volatile storage device 135B. The third temporary write holding buffer 160N may be associated with the third non-volatile storage device 135N, and so forth.
The dynamic heterogeneous flush control logic section 170 may cause the subset of write I/Os from the first temporary write holding buffer 160A to be written to the first non-volatile storage device 135A responsive to a number of flush requests associated with the first non-volatile storage device 135A being equal to or greater than a corresponding first flush threshold 172A. The dynamic heterogeneous flush control logic section 170 may cause the subset of write I/Os from the second temporary write holding buffer 160B to be written to the second non-volatile storage device 135B responsive to a number of flush requests associated with the second non-volatile storage device 135B being equal to or greater than a corresponding second flush threshold 172B. The dynamic heterogeneous flush control logic section 170 may cause the subset of write I/Os from the Nth temporary write holding buffer 160N to be written to the Nth non-volatile storage device 135N responsive to a number of flush requests associated with the Nth non-volatile storage device 135N being equal to or greater than a corresponding Nth flush threshold 172N, and so forth.
The attributes of the non-volatile storage devices used to hold the buffers (e.g., 162A, 162B, through 162N) can be used to determine the dynamic flush thresholds (e.g., 172A, 172B, through 172N). For example, when non-volatile devices such as PRAM, MRAM, or 3D XPoint devices hold the write holding buffers (e.g., 162A, 162B, through 162N), there is less risk of data loss due to power failure, and in some cases these devices have differing or higher capacity. Consequently, the write holding buffers (e.g., 162A, 162B, through 162N) can be flushed less often. Otherwise, the I/O interceptor logic section 120 can operate in a similar fashion to that described above with reference to
For a given write holding buffer (e.g., 160A or 162A), the I/O interface 155 may intercept write I/Os including, for example, data write I/Os 305 (i.e., W01 through Wn3), metadata write I/Os 310 (i.e., W04 through Wn6), data write I/Os 315 (i.e., W07 through Wn9), metadata write I/Os 320 (i.e., W010 through Wn12), and flush requests (e.g., F0 through Fn) from the application (e.g., 132). The write holding buffer (e.g., 160A or 162A) may receive the write I/Os from the I/O interface 155 and may store the write I/Os. If a read I/O is received from the application while the data is resident in the write holding buffer (e.g., 160A or 162A), the I/O interceptor logic section 120 may serve the read I/O request from the copy that is resident in the write holding buffer (e.g., 160A or 162A), or alternatively, from a re-mapped memory location, as further described below. The re-order logic section 165 may change an order of the write I/Os stored in each of the write holding buffers 160 or 162, as further described below. The re-order logic section 165 may convert random write I/Os to sequential write I/Os. The re-order logic section 165 may combine the re-ordered write I/Os (e.g., 305, 310, 315, and 320) into a combined write I/O (e.g., 325) for each of the write holding buffers (e.g., 160A or 162A).
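Serving a read from the buffer-resident copy might look like the following sketch, which continues the FlushControl example above; buffer_writes is assumed to be the list of held WriteIO records (newest last), and device.read is an assumed interface.

```python
def serve_read(buffer_writes, device, lba: int) -> bytes:
    """Serve a read from the newest buffer-resident copy of the block,
    falling through to the storage device if the block is not held."""
    for io in reversed(buffer_writes):    # newest write wins
        if io.lba == lba:
            return io.data
    return device.read(lba)               # assumed device interface
```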
The dynamic heterogeneous flush control logic section 170 may receive the flush requests (e.g., F0 through Fn) from the I/O interface 155. The dynamic heterogeneous flush control logic section 170 may communicate write I/O completion via corresponding completion messages (e.g., 330) of the write I/Os to the application (e.g., 132) without the write I/Os actually having been written to any of the one or more non-volatile storage devices 135. In other words, the dynamic heterogeneous flush control logic section 170 may substantially immediately reply to each flush request so that the application 132 may continue to send additional write I/Os without the need to wait for a corresponding flush request to actually be completed to one of the non-volatile storage devices 135, thereby significantly improving performance of the application 132. Put differently, the dynamic heterogeneous flush control logic section 170 may communicate write I/O completion 330 of the write I/Os to the application 132 before the write I/Os have actually been written to any of the one or more non-volatile storage devices 135. Since the write I/Os are stored in the write holding buffers 160 or 162, the dynamic heterogeneous flush control logic section 170 may, at a later time, cause the combined write I/O 325 to actually be written in a committed flush CFn to the corresponding non-volatile storage device 135. In other words, the dynamic heterogeneous flush control logic section 170 may implement a selective flush technique. The committed flush CFn may be a synchronous operation. Since the combined write I/O is larger than the individual write I/Os, storage performance is improved by avoiding the per-I/O overhead that would have been incurred had each write I/O been sent to the non-volatile storage device independently. In addition, since the write I/Os may be completed from system memory, which has significantly higher performance (i.e., higher throughput and lower latency) than the actual non-volatile storage devices, the performance of the application is significantly improved.
Moreover, since the committed flush CFn of the combined write I/O 325 occurs on or shortly following a flush boundary Fn (e.g., responsive to a number of flush requests being equal to or greater than a dynamic flush threshold), the data will maintain its consistency. Even in the event of a sudden power loss, the data will remain consistent on the non-volatile storage devices 135, although there may be some loss of data. Loss of data is acceptable to applications, because applications may read the data that exists on the non-volatile storage device and recover or otherwise roll back their state based on the stored data. But data inconsistency is usually not tolerated by applications and may cause unpredictable behavior, or even cause the application to become inoperable. Embodiments of the inventive concept ensure data consistency by accumulating write I/Os between committed flushes (e.g., CF0 and CFn), and then flushing the data in a combined write I/O to the non-volatile storage, as described herein.
The one or more non-volatile storage devices 135 may return a committed flush confirmation message (e.g., 175), which may be received by the I/O interceptor logic section 120 and used to confirm the completion of each committed flush transaction to the non-volatile storage, thereby providing synchronous committed flush operations. In other words, during a particular synchronous committed flush operation, other write I/Os and non-committed flush requests that are not a part of the particular committed flush operation are accumulated and not permitted to be actually flushed to the non-volatile storage until a subsequent synchronous committed flush operation.
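One possible shape for this accumulate-while-committing behavior is sketched below, under the assumption of a device object whose write_combined call blocks until the confirmation message arrives and returns an invented "COMMITTED" token; none of these names come from the disclosure.

```python
import threading

class SynchronousCommit:
    """While a committed flush is in flight, newly arriving write I/Os
    accumulate in a fresh buffer and are only sent to the device by a
    subsequent committed flush."""

    def __init__(self, device):
        self.device = device
        self.accumulating = []
        self.lock = threading.Lock()

    def on_write(self, io) -> None:
        with self.lock:
            self.accumulating.append(io)

    def committed_flush(self) -> None:
        with self.lock:
            batch, self.accumulating = self.accumulating, []
        # Blocks until the device returns its confirmation message
        # (write_combined and the "COMMITTED" token are assumptions).
        ack = self.device.write_combined(batch)
        assert ack == "COMMITTED"
```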
A sudden power loss may occur at any point in time, and the data will remain consistent. In other words, there will always be metadata paired with data on the one or more non-volatile storage devices 135; there may never be a situation in which metadata exists on the one or more non-volatile storage devices 135 without corresponding data. Consequently, in accordance with embodiments of the inventive concept, data consistency is maintained while significantly boosting performance of the application and storage stack.
In addition, a window of opportunity for re-ordering write I/Os is expanded because what are typically committed flush requests may be transformed into non-committed flush requests. Typically, write I/Os may only be re-ordered between flush boundaries (e.g., between F0 and F1). However, in accordance with embodiments of the inventive concept, typical flush requests are transformed into non-committed flush requests, and write I/Os may be re-ordered across the non-committed flush boundaries (e.g., between F0 and Fn and across F1, F2, and F3) because the actual committed flushes (e.g., CF0 and CFn) occur on an expanded time scale. In other words, the number of write I/Os is greater between two committed flushes (e.g., CF0 and CFn) than between two non-committed flush requests (e.g., F0 and F1, F1 and F2, or F2 and F3, etc.), which occur more frequently and on a shorter time scale. By expanding the window of opportunity to re-order I/Os, additional performance increases may be achieved because the I/Os may be ordered in such a way that they are more efficient to write to the non-volatile storage devices, and because they are grouped into a single combined (i.e., larger) write I/O.
The dynamic heterogeneous flush control logic section 170 may cause the combined write I/O 325 to be written to the corresponding non-volatile storage devices 135 responsive to a number of flush requests (e.g., F0, F1, F2, F3, and Fn) from among the flush requests being equal to or greater than a dynamic flush threshold. The dynamic flush threshold may be an integer such as 2, 3, 4, 5, 8, 10, 16, 20, 32, 64, 100, and so forth, and may be variable or otherwise change based on the particular non-volatile storage device (e.g., 135A) to which the write I/O 325 is being sent. The dynamic flush threshold, which controls how many flushes to accumulate before causing a committed flush CFn, may be configurable for each kind of non-volatile storage device (e.g., 135A), and may be predefined (e.g., predetermined and set) prior to operation. In some embodiments, the dynamic flush threshold may be a configurable setting, which may be modified by a user or system administrator. In some embodiments, the dynamic flush threshold may be automatically determined by the dynamic heterogeneous flush control logic section 170 based on characteristics of the underlying physical non-volatile storage devices.
Alternatively or in addition, the dynamic heterogeneous flush control logic section 170 may cause the committed flush CFn to write the combined write I/O 325 to the one or more non-volatile storage devices 135 responsive to a threshold amount of data being accumulated. In other words, when a threshold amount of data is reached, the dynamic heterogeneous flush control logic section 170 may cause the committed flush CFn at the next flush request. The threshold amount of data may vary depending on the characteristics of the underlying physical non-volatile storage devices. Alternatively or in addition, the dynamic heterogeneous flush control logic section 170 may cause the committed flush CFn to write the combined write I/O 325 to the one or more non-volatile storage devices 135 responsive to an expiration of a predefined period of time. The predefined period of time may be, for example, on the order of seconds, such as 5 seconds, 10 seconds, 30 seconds, 60 seconds, and so forth. The threshold period of time may vary depending on the characteristics of the underlying physical non-volatile storage devices. This reduces the chances of data loss in the event of a sudden power loss, while boosting performance and maintaining data consistency. Alternatively or in addition, the dynamic heterogeneous flush control logic section 170 may cause the committed flush CFn to write the combined write I/O 325 to the one or more non-volatile storage devices 135 responsive to a first criterion that a threshold amount of data has been accumulated, and then subsequently, responsive to a second criterion that the dynamic flush threshold is satisfied. In other words, the committed flush CFn may occur when either or both criteria are satisfied.
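These criteria can be combined disjunctively, as in this minimal sketch; the default threshold values (8 flushes, 4 MiB, 30 seconds) are illustrative assumptions, not values from the disclosure.

```python
import time

def should_commit(pending_flushes: int, buffered_bytes: int,
                  last_commit_time: float,
                  flush_threshold: int = 8,
                  data_threshold: int = 4 * 1024 * 1024,
                  max_age_seconds: float = 30.0) -> bool:
    """Return True when any commit criterion is met: enough accumulated
    flush requests, enough accumulated data, or enough elapsed time."""
    return (pending_flushes >= flush_threshold
            or buffered_bytes >= data_threshold
            or time.monotonic() - last_commit_time >= max_age_seconds)
```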
The I/O interface 155 of the I/O interceptor logic section 120 may receive a first subset of data write I/Os (e.g., 305), a first flush request (e.g., F1), a first subset of metadata write I/Os (e.g., 310), a second flush request (e.g., F2), a second subset of data write I/Os (e.g., 315), a third flush request (e.g., F3), a second subset of metadata write I/Os (e.g., 320), and a fourth flush request (e.g., Fn). The re-order logic section 165 may change the order of write I/Os within the first subset of the data write I/Os 305, the first subset of the metadata write I/Os 310, the second subset of the data write I/Os 315, and the second subset of the metadata write I/Os 320. In other words, the re-order logic section 165 may change the order of the individual write I/Os within each subset and/or change the order of the individual write I/Os across the various subsets. The re-order logic section 165 may combine the first subset of the data write I/Os 305, the first subset of the metadata write I/Os 310, the second subset of the data write I/Os 315, and the second subset of the metadata write I/Os 320 into the combined write I/O 325. The re-order logic section 165 may insert or otherwise attach a header HDR and footer FTR to the combined write I/O 325, as further described below.
The re-order logic section 165 may change the order of the various write I/Os so that they may be arranged in a combined write I/O 325 in such a manner that each of the LBAs (e.g., LBA 1, LBA 2, LBA 3, etc.) associated with a corresponding one of the write I/Os (e.g., Wn12, W010, W111, and so forth) of the combined write I/O 325 is ordered in an ascending or descending arrangement. For example, the write I/Os may originally arrive in the following order having the following associated LBAs: W01/LBA 6, W12/LBA 12, Wn3/LBA 9, W04/LBA 7, W15/LBA 10, Wn6/LBA 11, W07/LBA 5, W18/LBA 4, Wn9/LBA 8, W010/LBA 2, W111/LBA 3, and Wn12/LBA 1. It will be understood that these LBA values are by way of example and not limitation, as other LBAs not specifically described herein may fall within the various embodiments of the inventive concept described herein. The re-order logic section 165 may change the order of the individual write I/Os within the write holding buffer 160 or 162 to have the following order: Wn12/LBA 1, W010/LBA 2, W111/LBA 3, W18/LBA 4, W07/LBA 5, W01/LBA 6, W04/LBA 7, Wn9/LBA 8, Wn3/LBA 9, W15/LBA 10, Wn6/LBA 11, and W12/LBA 12, as shown in
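In essence, this worked example reduces to a sort on LBA; the snippet below reproduces the arrival order above and recovers the stated re-ordered result using the same labels.

```python
# Arrival order and LBAs copied from the example above.
arrivals = [("W01", 6), ("W12", 12), ("Wn3", 9), ("W04", 7),
            ("W15", 10), ("Wn6", 11), ("W07", 5), ("W18", 4),
            ("Wn9", 8), ("W010", 2), ("W111", 3), ("Wn12", 1)]

reordered = sorted(arrivals, key=lambda w: w[1])   # ascending LBA
print(reordered[:3])   # [('Wn12', 1), ('W010', 2), ('W111', 3)]
```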
The re-order logic section 165 may change the order of the individual write I/Os by pre-pending a header HDR and appending a footer FTR, and re-mapping the individual write I/Os to the re-mapped memory section 405. In other words, the re-order logic section 165 may copy the individual write I/Os, a header HDR, and a footer FTR, to form the combined write I/O 325, such that each of a plurality of logical block addresses (LBAs) (e.g., 1, 2, 3, etc.) associated with a corresponding one of the plurality of write I/Os (e.g., Wn12, W010, W111, etc.) of the combined write I/O 325 may be arranged in ascending or descending order, and such that each of the plurality of write I/Os (e.g., Wn12, W010, W111, etc.) of the combined write I/O 325 is physically contiguous in the re-mapped memory section 405 to another of the plurality of write I/Os (e.g., Wn12, W010, W111, etc.) of the combined write I/O 325. The re-order logic section 165 may convert random write I/Os to sequential write I/Os.
For example, the combined write I/O 325 within the re-mapped memory section 405 may have the individual write I/Os re-mapped and arranged in the following order: HDR/LBA 0, Wn12/LBA 1, W010/LBA 2, W111/LBA 3, W18/LBA 4, W07/LBA 5, W01/LBA 6, W04/LBA 7, Wn9/LBA 8, Wn3/LBA 9, W15/LBA 10, Wn6/LBA 11, W12/LBA 12, and FTR/LBA 13, as shown in
In some embodiments, the re-mapped memory section 405 may be known only to the I/O interceptor logic section 120. The header HDR and/or footer FTR may include re-mapping translation information. Alternatively or in addition, the header HDR and/or footer FTR may include information that indicates that the combined write I/O 325 is valid. Any combined write I/O 325 having a header HDR but no footer FTR on the one or more non-volatile storage devices 135 may be considered invalid. Such a scenario may be caused by a sudden power loss during the committed flush operation CFn. In this scenario, the combined write I/O 325 may be determined to be invalid and may be discarded. When a valid combined write I/O 325 is later retrieved from the one or more non-volatile storage devices 135 to be read in a read I/O operation, the re-mapping translation information stored in the header HDR and/or the footer FTR may be used by the I/O interceptor logic section 120 to provide expected data and/or associated expected LBA information to the application 132.
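The header/footer validity rule might be checked at recovery time as in the sketch below; the magic markers and the block-list layout are invented for illustration and are not part of the disclosure.

```python
HDR_MAGIC = b"HDR1"   # invented framing markers, for illustration only
FTR_MAGIC = b"FTR1"

def is_valid_combined_write(blocks: list) -> bool:
    """A combined write recovered from storage is valid only if both the
    header block and the footer block survived (i.e., the committed flush
    was not torn by a sudden power loss)."""
    return (len(blocks) >= 2
            and blocks[0].startswith(HDR_MAGIC)
            and blocks[-1].startswith(FTR_MAGIC))
```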
The dynamic heterogeneous flush control logic section 170 may cause the physically contiguous combined write I/O 325 to be written to the corresponding non-volatile storage devices 135 responsive to a number of flush requests being equal to or greater than the dynamic flush threshold, or other criteria, as further described below.
The re-order logic section 165 need not be included in the I/O interceptor logic section 120. Rather, in this embodiment, the dynamic heterogeneous flush control logic section 170 may manage a plurality of write holding sub-buffers, such as Sub-Buffer 0, Sub-Buffer 1, Sub-Buffer 2, and Sub-Buffer N, as shown in
The dynamic heterogeneous flush control logic section 170 may receive the plurality of flush requests (F0, F1, F2, etc.) from the I/O interface 155 (of
The dynamic heterogeneous flush control logic section 170 may cause the first subset of the data write I/Os (e.g., 305 of
The dynamic heterogeneous flush control logic section 170 may cause the multiple-buffer holding queue 505 to empty the write I/Os from the write holding sub-buffers (e.g., Sub-Buffer 0, Sub-Buffer 1, Sub-Buffer 2, and Sub-Buffer N) to the corresponding non-volatile storage device (e.g., 135A) responsive to a number of flush requests being equal to or greater than a dynamic flush threshold, and in the order received. In this embodiment, the re-ordering and re-mapping of the write I/Os is avoided, and no additional headers or footers are needed. On the other hand, the performance increase is not as pronounced as the embodiments described above because the LBAs are sent to the one or more non-volatile storage devices 135 in an essentially random fashion. In the event of a sudden power loss, data consistency is still maintained because the order in which the application intended to write the data is preserved.
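A hedged sketch of this multiple-buffer holding queue follows, assuming a device object with a per-I/O write call: each intercepted flush request seals the current sub-buffer, and a committed flush drains the sealed sub-buffers strictly in arrival order, with no re-ordering and no header/footer framing.

```python
from collections import deque

class MultiBufferHoldingQueue:
    """Sub-buffer queue: flush requests seal sub-buffers; a committed
    flush empties them to the device in the order received."""

    def __init__(self, device, flush_threshold: int):
        self.device = device
        self.flush_threshold = flush_threshold
        self.sealed = deque()     # completed sub-buffers, oldest first
        self.current = []         # sub-buffer accumulating new write I/Os

    def on_write(self, io) -> None:
        self.current.append(io)

    def on_flush(self) -> None:
        self.sealed.append(self.current)   # flush boundary seals the sub-buffer
        self.current = []
        if len(self.sealed) >= self.flush_threshold:
            self._committed_flush()

    def _committed_flush(self) -> None:
        while self.sealed:                 # empty in the order received
            for io in self.sealed.popleft():
                self.device.write(io)      # assumed per-I/O device interface
```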
At 1025, a dynamic heterogeneous flush control logic section may receive the plurality of flush requests from the I/O interface. At 1030, the dynamic heterogeneous flush control logic section may communicate write I/O completion of the plurality of write I/Os to the application without the plurality of write I/Os having been written to a non-volatile storage device. In other words, the dynamic heterogeneous flush control logic section may communicate write I/O completion of the write I/Os to the application before the write I/Os have actually been written to any of the one or more non-volatile storage devices 135. At 1035, the dynamic heterogeneous flush control logic section may cause the combined write I/O to be written to the corresponding non-volatile storage device responsive to a number of flush requests from among the plurality of flush requests being equal to or greater than a flush threshold, a threshold amount of data being accumulated, and/or an expiration of a predefined time period.
At 1120, the dynamic heterogeneous flush control logic section may receive the plurality of flush requests from the I/O interface. At 1125, the dynamic heterogeneous flush control logic section may communicate write I/O completion of the plurality of write I/Os to the application without the plurality of write I/Os having been written to a non-volatile storage device. In other words, the dynamic heterogeneous flush control logic section may communicate write I/O completion of the write I/Os to the application before the write I/Os have actually been written to any of the one or more non-volatile storage devices 135. At 1130, the dynamic heterogeneous flush control logic section may cause the multiple-buffer holding queue to empty the plurality of write I/Os from the plurality of write holding sub-buffers to the non-volatile storage device responsive to at least one of a number of flush requests being equal to or greater than a dynamic flush threshold, a threshold amount of data being accumulated, or an expiration of a predefined time period.
If the computing system 1200 is a mobile device, the battery 1235 may power the computing system 1200, and battery drain may be reduced by implementation of the embodiments of the inventive concept described herein due to more efficient writing of data to storage. Although not shown in
In example embodiments, the computing system 1200 may be used as a computer, computer server, server rack, portable computer, Ultra Mobile PC (UMPC), workstation, net-book, PDA, web tablet, wireless phone, mobile phone, smart phone, e-book, PMP (portable multimedia player), digital camera, digital audio recorder/player, digital picture/video recorder/player, portable game machine, navigation system, black box, 3-dimensional television, a device capable of transmitting and receiving information wirelessly, one of various electronic devices constituting a home network, one of various electronic devices constituting a computer network, one of various electronic devices constituting a telematics network, an RFID device, or one of various electronic devices constituting a computing system.
The following discussion is intended to provide a brief, general description of a suitable machine or machines in which certain aspects of the inventive concept may be implemented. Typically, the machine or machines include a system bus to which is attached processors, memory, e.g., random access memory (RAM), read-only memory (ROM), or other state preserving medium, storage devices, a video interface, and input/output interface ports. The machine or machines may be controlled, at least in part, by input from conventional input devices, such as keyboards, mice, etc., as well as by directives received from another machine, interaction with a virtual reality (VR) environment, biometric feedback, or other input signal. As used herein, the term “machine” is intended to broadly encompass a single machine, a virtual machine, or a system of communicatively coupled machines, virtual machines, or devices operating together. Exemplary machines include computing devices such as personal computers, workstations, servers, portable computers, handheld devices, telephones, tablets, etc., as well as transportation devices, such as private or public transportation, e.g., automobiles, trains, cabs, etc.
The machine or machines may include embedded controllers, such as programmable or non-programmable logic devices or arrays, Application Specific Integrated Circuits (ASICs), embedded computers, smart cards, and the like. The machine or machines may utilize one or more connections to one or more remote machines, such as through a network interface, modem, or other communicative coupling. Machines may be interconnected by way of a physical and/or logical network, such as an intranet, the Internet, local area networks, wide area networks, etc. One skilled in the art will appreciate that network communication may utilize various wired and/or wireless short range or long range carriers and protocols, including radio frequency (RF), satellite, microwave, Institute of Electrical and Electronics Engineers (IEEE) 802.11, Bluetooth®, optical, infrared, cable, laser, etc.
Embodiments of the present inventive concept may be described by reference to or in conjunction with associated data including functions, procedures, data structures, application programs, etc. which when accessed by a machine results in the machine performing tasks or defining abstract data types or low-level hardware contexts. Associated data may be stored in, for example, the volatile and/or non-volatile memory, e.g., RAM, ROM, etc., or in other storage devices and their associated storage media, including hard-drives, floppy-disks, optical storage, tapes, flash memory, memory sticks, digital video disks, biological storage, etc. Associated data may be delivered over transmission environments, including the physical and/or logical network, in the form of packets, serial data, parallel data, propagated signals, etc., and may be used in a compressed or encrypted format. Associated data may be used in a distributed environment, and stored locally and/or remotely for machine access.
Having described and illustrated the principles of the inventive concept with reference to illustrated embodiments, it will be recognized that the illustrated embodiments may be modified in arrangement and detail without departing from such principles, and may be combined in any desired manner. And although the foregoing discussion has focused on particular embodiments, other configurations are contemplated. In particular, even though expressions such as “according to an embodiment of the inventive concept” or the like are used herein, these phrases are meant to generally reference embodiment possibilities, and are not intended to limit the inventive concept to particular embodiment configurations. As used herein, these terms may reference the same or different embodiments that are combinable into other embodiments.
Embodiments of the inventive concept may include a non-transitory machine-readable medium comprising instructions executable by one or more processors, the instructions comprising instructions to perform the elements of the inventive concepts as described herein.
The foregoing illustrative embodiments are not to be construed as limiting the inventive concept thereof. Although a few embodiments have been described, those skilled in the art will readily appreciate that many modifications are possible to those embodiments without materially departing from the novel teachings and advantages of the present disclosure. Accordingly, all such modifications are intended to be included within the scope of this inventive concept as defined in the claims.
| Number | Date | Country |
| --- | --- | --- |
| 62426421 | Nov 2016 | US |