This disclosure relates to input/output (I/O) infrastructure and, in particular, to a log-enabled cache.
Write-through cache implementations provide high levels of security, but can suffer performance problems in write-intensive environments. Write-back cache implementations can improve write performance, but may be subject to data loss in certain failure modes.
Disclosed herein are embodiments of a method for persistent cache logging. The disclosed methods may comprise one or more machine-executable operations and/or steps. The disclosed operations and/or steps may be embodied as program code stored on a computer readable storage medium. Accordingly, embodiments of the methods disclosed herein may be embodied as a computer program product comprising a computer readable storage medium storing computer usable program code executable to perform one or more method operations and/or steps.
Embodiments of the disclosed methods may comprise storing data directed to a backing store in a cache, recording an ordered sequence of log entries on a persistent storage medium, wherein each log entry corresponds to a set of one or more storage operations, and/or maintaining associations between data stored in the cache and the log entries, at least until the data stored in the cache is stored on the backing store.
In some embodiments, the method may further include identifying data stored in the cache that is associated with a selected one of a plurality of log periods, each log period comprising a set of one or more log entries, and/or writing the identified data from the cache to the backing store. Some embodiments of the method may further include marking the backing store with an indicator of the selected log period in response to writing the identified data from the cache to the backing store.
The method may also comprise detecting a failure condition, which may result in the loss of cache data. In response, the method may comprise identifying a set of log entries in the log corresponding to data that has not been written to the backing store, and writing data from the identified set of log entries to the backing store. Identifying the set of log entries may comprise determining an indicator of a last log period committed to the backing store. In some embodiments, the method includes queuing write operations corresponding to the log entries in the identified set in a buffer, removing write operations made redundant by one or more other write operations from the buffer, and writing data to the backing store corresponding to the remaining write operations in the buffer. The method may be further configured to admit data of the accessed log entries into the cache.
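For illustration, the following Python sketch shows one possible recovery flow of the kind described above; the names (LogEntry, recover, backing_store) are hypothetical and do not appear in the disclosure. It replays log entries recorded after the last committed log period, drops writes made redundant by later writes to the same identifier, and applies the surviving writes to the backing store.

```python
from collections import OrderedDict
from dataclasses import dataclass

@dataclass
class LogEntry:
    period: int          # log period the entry belongs to (illustrative field)
    identifier: int      # backing-store address, e.g., a logical block address
    data: bytes          # payload captured when the cache write was logged

def recover(log_entries, last_committed_period, backing_store):
    """Replay log entries newer than the last committed log period.

    Writes made redundant by later writes to the same identifier are removed
    from the buffer so each identifier is written to the backing store once.
    """
    buffer = OrderedDict()
    for entry in log_entries:                      # the log is already in temporal order
        if entry.period <= last_committed_period:  # already reflected on the backing store
            continue
        buffer[entry.identifier] = entry.data      # a later write supersedes an earlier one

    for identifier, data in buffer.items():
        backing_store.write(identifier, data)      # apply only the surviving writes
```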
Disclosed herein are embodiments of an apparatus, which may comprise a storage request module configured to identify storage requests directed to a backing store, a cache storage module configured to write data to a cache in one or more cache write operations performed in response to the identified storage requests, and a log module configured to log the cache write operations on a non-volatile storage device, wherein the storage request module is configured to acknowledge completion of an identified storage request in response to logging a cache write operation corresponding to the identified storage request on the non-volatile storage device. The cache storage module may be configured to operate within a virtual machine and the log module may be configured to operate within a virtualization kernel.
The log module may be configured to store log entries corresponding to the identified storage requests sequentially within a physical address space of the non-volatile storage device. The log module may be further configured to divide the log into an ordered sequence of log segments, each log segment comprising a respective portion of the ordered log of cache write operations. The apparatus may further include a synchronization module configured to write data to the backing store, the data corresponding to the cache write operations within a selected one of the log segments. The synchronization module may be configured to combine a plurality of redundant cache write operations within the selected log segment that pertain to the same data identifier into a single, combined write operation to the backing store.
In some embodiments, the apparatus includes a log association module configured to maintain cache metadata configured to associate data stored in the cache with respective log segments corresponding to the cache write operations of the data. The synchronization module may be configured to identify cache data associated with the selected log segment by use of the cache metadata and to write the identified cache data to the backing store. The synchronization module may be configured to identify a last log segment committed to the backing store and to select the log segment to commit to the backing store based on the identified last log segment. The synchronization module may be further configured to record an indication that the selected log segment has been committed to the backing store in response to writing the identified cache data to the backing store. In some embodiments, the synchronization module is further configured to reclaim the selected log segment in response to writing the data corresponding to the cache write operations within the selected log segment.
The log may comprise an ordered sequence of entries, each entry corresponding to a respective cache write operation. The apparatus may include a recovery module configured to access the log entries from a starting entry in the log to a last entry in the log and to implement write operations corresponding to the accessed log entries, wherein the starting entry is identified based on a synchronization state of the backing store.
Disclosed herein are embodiments of a system, which may comprise a cache virtualization module configured to cache data of each of a plurality of virtual machines in a cache, a cache log module configured to maintain a persistent, ordered log of write operations performed on the cache within respective log intervals, and a cache management system of one of the plurality of virtual machines configured to associate cached data of the virtual machine with respective log intervals at least until the data stored in the cache is stored on the backing store.
In some embodiments, each of the plurality of virtual machines comprises a respective cache management system configured to manage cache data of the virtual machine, including mappings between virtual machine cache data and respective log intervals. The system may further include a log synchronization module configured to identify virtual machine cache data corresponding to one or more log periods by use of the cache management systems of the virtual machines and to write the identified virtual machine cache data to a backing store. The one or more log periods may comprise a plurality of write operations pertaining to a particular logical identifier, and the cache management system may be configured to identify cache data corresponding to a most recent one of the plurality of write operations pertaining to the particular logical identifier within the one or more log periods.
The cache log module may be configured to provide an identifier of a current log period to the plurality of virtual machines, and the cache management systems may be configured to associate cache data corresponding to cache write requests with the provided identifier. In some embodiments, the cache log module is further configured to provide an updated identifier to a virtual machine in response to incrementing the current log period before completion of a cache write request of the virtual machine. The cache management system may be configured to associate cache data of the write request with the updated identifier.
The cache virtualization module may be configured to indicate that a request to cache data of a virtual machine in the cache storage is complete in response to determining that an entry corresponding to the request is stored in the persistent, ordered log.
This disclosure includes and references the accompanying drawings, which provide a more particular description of the embodiments disclosed herein. The disclosure, however, is not limited to the particular embodiments depicted in the figures. The teachings of the disclosure may be utilized and/or adapted to other embodiments and/or changes may be made to the disclosed embodiments, without departing from the scope of the disclosure.
The system 100 may comprise an operating environment 103 configured to manage hardware resources of the computing device 102, including the processing resources 204, storage resources 205, memory resources 206, and/or I/O resources 207 disclosed above. The operating environment 103 may comprise an operating system. In some embodiments, the operating environment 103 is a “bare metal” operating environment configured to directly manage hardware resources. In other embodiments, the operating environment 103 may be a virtualized operating environment configured to manage virtualized resources of a virtualization layer, such as a hypervisor, or the like.
The operating environment 103 may comprise one or more storage client(s) 104, which may include, but are not limited to: user-level applications, kernel-level applications, file systems, databases, and the like. The storage client(s) 104 may perform I/O operations by use of, inter alia, an I/O stack 111 of the operating environment 103. The I/O stack 111 may define a storage architecture in which storage services, such as file system drivers, volume drivers, disk drivers, and the like, are deployed. Storage services may be configured to interoperate by issuing and/or consuming I/O requests within various layers 113A-N of the I/O stack 111. The layers 113A-N may include, but are not limited to: a file layer, a volume layer, a disk layer, a SCSI layer, and so on.
The system 100 may comprise a cache module 120 configured to cache data of one or more of the storage clients 104. The cache module 120 may comprise a cache management system (CMS) 220, configured to manage cache operations within the operating environment 103, and a cache storage module 213 configured to manage cache resources, such as the cache 216. In some embodiments, CMS 220 may comprise a storage request module 222 configured to monitor I/O requests within the I/O stack 111. The CMS 220 may service selected I/O requests by use of the cache storage module 213, which may include, but is not limited to: admitting data into the cache 216, reading data from cache 216, and the like. As disclosed in further detail herein, admission into the cache may be determined by a cache policy and/or in accordance with the availability of cache resources.
The CMS 220 may be configured to manage cache operations by use of cache tags 221. As used herein, a “cache tag” refers to metadata configured to, inter alia, associate data that has been admitted into the cache with a storage location of the data within the cache 216. Accordingly, in some embodiments, cache tags 221 comprise mappings between data identifier(s) of the storage clients 104 (e.g., data identifiers, logical identifiers, logical addresses, primary storage addresses, and the like) and one or more cache storage locations. Accordingly, the cache tags 221 may comprise a translation layer between a namespace of a storage client 104, operating environment 103, and/or I/O stack 111 and the CMS 220. The cache tags 221 may represent an allocation of cache resources to a particular storage client 104, computing device 102, cache layer, and/or virtual machine (described in further detail below). Cache tags 221 may comprise cache metadata, such as access metrics, cache mode, and so on.
The cache storage module 213 may be configured to store data that has been admitted into the cache (e.g., by the CMS 220) within a cache 216. The cache 216 may comprise volatile storage resources (e.g., DRAM), battery-backed RAM, non-volatile storage media, solid-state storage media, and/or the like.
In some embodiments, the cache module 120 is configured to operate in a write-through cache mode. As used herein, a “write-through” cache mode refers to a cache mode in which data is admitted into the cache by: a) storing the data in cache 216 and b) writing the data to the primary storage system 212. The operation (and corresponding I/O request) may not be considered to be complete and/or acknowledged until the data is written to the primary storage system 212. Therefore, the critical path of a cache write operation may comprise one or more write operations to the primary storage system 212. Write operations to the primary storage system 212 may take considerably longer than write operations to the cache 216. The performance differential may be even greater under certain types of load conditions; for example, the performance of the primary storage system 212 may further degrade under highly random write conditions. As used herein, a “random” write operation refers to a storage operation to write data to an arbitrary physical storage location of a storage device (e.g., primary storage system 212). Therefore, although write-through cache modes may provide security against data loss, write performance can suffer. Moreover, write-through cache modes may cause scaling problems due to write overheads imposed by a large number of storage clients and/or caching systems (e.g., in a virtualized environment, such as a virtual desktop infrastructure (VDI) environment or the like).
Other cache modes may ameliorate certain write performance issues. In some embodiments, for example, the cache module 120 may be configured to implement a write-back or copy-back cache mode in which data is admitted into the cache without writing the data through to the primary storage system 212. Accordingly, the critical path of write operations may comprise writing data to the cache 216 rather than waiting for the data to be written to the primary storage system 212. Modified cache data (e.g., dirty data) may be written to the primary storage system 212 outside of the critical path of I/O requests. However, these types of cache modes may be susceptible to data loss, which may occur if the contents of the cache and/or cache metadata (e.g., cache tags 221) are lost before write-back operations to the primary storage system 212 are completed.
In some embodiments, the cache module 120 may be configured to implement a logged cache mode. As used herein, a logged cache mode refers to a cache mode in which write operations to the primary storage system 212 are deferred (e.g., performed outside of the critical path of the I/O requests) and cache data is secured against data loss. Cache data may be secured against loss by use of, inter alia, the cache log module 313. The cache log module 313 may be configured to maintain a log 320 of cache operations performed on the cache 216 on a persistent, non-volatile storage device 316. The log 320 may comprise a record of the ordered sequence of cache storage operations performed on the cache 216. Requests to write data into the cache may be acknowledged as complete in response to logging the write request (e.g., storing a record of the operation within the log 320) as opposed to writing the corresponding data to the primary storage system 212 as in a write-through cache mode. As disclosed in further detail below, the cache log module 313 may be configured to log cache storage operations in an ordered sequence, based on the temporal order of the cache storage operations, which may result in converting “random” write operations to various portions of a physical address space to more efficient sequential write operations.
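A minimal sketch of the logged cache mode write path follows; the class, method, and record layout are illustrative assumptions rather than the disclosed implementation. The write is acknowledged once the data is stored in the cache and a record of the operation has been persisted to the log, with the backing store updated later, outside the critical path.

```python
import os

class LoggedCache:
    """Illustrative logged cache mode: acknowledge the write after updating the
    cache and appending a record to the log; defer the backing-store update."""

    def __init__(self, cache, log_path):
        self.cache = cache                       # in-memory stand-in for cache 216
        self.log = open(log_path, "ab")          # append-only stand-in for log 320

    def write(self, identifier, data):
        self.cache[identifier] = data            # store the data in the cache
        record = identifier.to_bytes(8, "big") + len(data).to_bytes(4, "big") + data
        self.log.write(record)                   # log the cache write sequentially
        self.log.flush()
        os.fsync(self.log.fileno())              # persist the log entry
        return True                              # acknowledge; primary storage is updated later
```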
The host 202 may comprise one or more computing devices capable of hosting the virtual machines 208A-N. The host 202 may comprise, for example, processing resources 204, storage resources 205, memory resources 206, I/O resources 207, and the like, as disclosed above.
The virtualization kernel 210 may be configured to manage the operation of the virtual machines 208A-N operating on the host 202 as well as other components and services provided by the host 202. For example, the virtualization kernel 210 may be configured to handle various I/O operations associated with a primary storage system 212 or other I/O devices. The primary storage system 212 may be shared among the multiple virtual machines 208A-N and/or multiple hosts.
The system 101 may comprise a cache virtualization module 233 configured to provide caching services to the virtual machines 208A-N deployed on the host computing device 202. The cache virtualization module 233 may comprise a cache storage module 213, which may include a cache 216 and a cache log module 313, as disclosed above. The cache storage module 213 may further comprise a cache provisioner module 214 and map module 217. The cache provisioner module 214 may be configured to provision resources to the virtual machines 208A-N, which may comprise dynamically allocating cache resources and/or I/O operations (IOPS) to the virtual machines 208A-N. The cache provisioner module 214 may be configured to provide for sharing resources of the cache 216 between multiple virtual machines 208A-N.
In some embodiments, one or more of the virtual machines 208A-N may comprise an I/O driver 218A-N and a cache management system (CMS) 220A-N. The I/O driver 218A-N may be configured to intercept I/O operations of the associated virtual machine 208A-N (within respective I/O stacks 111 of the virtual machines 208A-N) and to direct the I/O operations to the corresponding CMS 220A-N for processing; selected I/O operations may be serviced using the cache virtualization module 233.
In some embodiments, the virtual machines 208A-N may be configured to be transferred and/or relocated between hosts 202. The systems, apparatus, and methods disclosed herein may provide for transferring a “cache operating state” between hosts 202. As used herein, “cache operating state” or “cache state” refers to a current working state of a cache, which may include, but is not limited to: cache metadata, such as cache admission information (e.g., cache tags 221), access metrics, and so on; cache data (e.g., the contents of a cache 216); and the like. Transferring a cache operating state may, therefore, comprise retaining cache state on a first host 202 and/or transferring the retained cache state (including cache metadata and/or cache data) to another, different host 202. The virtualization kernel 210 (or other virtualization layer) may be configured to prevent virtual machines that reference local resources of the host 202, such as local disk storage or the like, from being transferred. Accordingly, virtual machines 208A-N may be configured to access the cache 216 as a shared storage resource and/or in a way that does not prevent the virtual machines 208A-N from being transferred between hosts 202.
One or more of the virtual machines 208A-N may comprise a CMS 220A-N, which may be configured to manage cache resources provisioned to the virtual machine 208A-N. As disclosed above, the CMS 220A-N may be configured to maintain cache metadata, such as cache tags 221, to represent data that has been admitted into the cache 216. The cache tags 221 may be maintained within memory resources of the virtual machine 208A-N, such that the cache tags 221 are transferred with the virtual machine 208A-N between hosts 202.
The cache provisioner module 214 may be configured to dynamically provision cache resources to the virtual machines 208A-N. Cache allocation information associated with a particular virtual machine (e.g., Virtual Machine 208A) may be communicated to the corresponding virtual-machine CMS 220A-N via the I/O driver 218 and/or using another communication mechanism. In some embodiments, the cache provisioner module 214 is configured to maintain mappings between virtual machines 208A-N and respective cache storage locations allocated to the virtual machines 208A-N. The mappings may be used to secure cache data of the virtual machines 208A-N (e.g., by limiting access to the virtual machine 208A-N that is mapped to the cached data) and/or to provide for retaining and/or transferring cache data of one or more virtual machines 208A-N transferred from the host 202 to other, remote hosts.
The CMS 220A-N may be configured to maintain cache metadata, which may comprise cache tags 221A-N in accordance with the cache storage that has been allocated to the virtual machine 208A-N. The cache tags 221A-N may represent cache resources that have been allocated to a particular virtual machine 208A-N by the cache provisioner module 214. Cache tags that are “occupied” (e.g., are associated with valid cache data), may comprise mappings and/or associations between one or more identifiers of the data and corresponding cache resources. As used herein, an “identifier” of a cache tag 221A-N refers to an identifier used by the virtual machine 208A-N and/or storage client 104 to reference data that has been (or will be) stored in the cache 216. A cache tag identifier may include, but is not limited to: an address (e.g., a memory address, physical storage address, logical block address, etc., such as an address on the primary storage system 212), a name (e.g., file name, directory name, volume name, etc.), a logical identifier, a reference, or the like.
In some embodiments, the cache tags 221A-N represent a “working set” of a virtual machine 208A-N cache. As used herein, a “working set” of cache tags 221A-N refers to a set of cache tags corresponding to cache data that has been admitted and/or retained in the cache 216 by the CMS 220A-N through, inter alia, the application of one or more cache policies, such as cache admission policies, cache retention and/or eviction policies (e.g., cache aging metadata, cache steal metadata, least recently used (LRU), “hotness” and/or “coldness,” and so on), cache profiling information, file- and/or application-level knowledge, and the like. Accordingly, the working set of cache tags 221A-N may represent the set of cache data that provides optimal I/O performance for the virtual machine 208A-N under certain operating conditions.
In some embodiments, the CMS 220A-N may be configured to preserve a “snapshot” of the current cache state, which may comprise persisting the cache tags 221A-N (and/or related cache metadata) in a non-volatile storage medium, such as the primary storage system 212, persistent cache storage device (e.g., cache 216), or the like. A snapshot may comprise all or a subset of the cache metadata of the CMS 220A-N (e.g., cache state), which may include, but is not limited to: the cache tags 221A-N, related cache metadata, such as access metrics, and so on. In some embodiments, a snapshot may further comprise “pinning” data in the cache 216, which may cause data referenced by the one or more cache tags 221 to be retained in the cache 216. Alternatively, the snapshot may reference only the data identifiers (e.g., cache tags 221A-N), and may allow the underlying cache data to be removed and/or evicted from the cache 216.
The CMS 220A-N may be configured to load a snapshot from persistent storage, and to use the snapshot to populate the cache tags 221A-N. A snapshot may be loaded as part of an initialization operation (e.g., cache warm-up) and/or in response to configuration and/or user preference. For example, the CMS 220A-N may be configured to load different snapshots that are optimized for particular application(s) and/or service(s). Loading a snapshot may further comprise requesting cache storage from the cache provisioner module 214, as disclosed herein. In some embodiments, the CMS 220A-N may load a subset of a snapshot if the virtual machine 208A-N cannot allocate sufficient cache space for the full snapshot.
The CMS 220A-N may be further configured to retain the cache tags 221A-N in response to relocating and/or transferring the virtual machine 208A-N to another host 202. Retaining the cache tags 221 may comprise maintaining the cache tags 221A-N in the memory of the virtual machine 208A-N and/or not invalidating the cache tags 221A-N. Retaining the cache tags 221A-N may further comprise requesting cache storage from the cache provisioner module 214 of the destination host in accordance with the retained cache tags 221A-N, and/or selectively adding and/or removing cache tags 221A-N in response to being allocated more or less cache storage on the destination host. In some embodiments, the CMS 220A-N may retain the cache tags 221A-N despite the fact that the cache data referenced by the cache tags 221A-N does not exist in the cache 216 of the new destination host. As disclosed in further detail below, the cache storage module 213 may be configured to populate the cache 216 with cache data from a previous host 202 of the virtual machine 208A-N (e.g., via a network transfer), and/or from a shared, primary storage system 212.
The cache 216 may comprise one or more non-volatile storage resources, such as a solid-state storage device and/or a portion thereof. The cache storage module 213 may logically partition the cache 216 into multiple chunks. As used herein a “chunk” refers to an arbitrarily sized portion of cache storage capacity; the cache 216 may be divided into any number of chunks having any size. Each cache chunk may comprise a plurality of pages, each of which may comprise one or more storage units (e.g., sectors). In a particular embodiment, each chunk may comprise 256 MB (megabytes) of storage capacity; a 2 TB (terabyte) cache storage device 216 divided into 256 MB chunks may comprise 8192 chunks.
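As a worked check of the example capacities above (binary units assumed):

```python
cache_capacity_mb = 2 * 1024 * 1024        # 2 TB expressed in MB
chunk_size_mb = 256                        # chunk size in MB
print(cache_capacity_mb // chunk_size_mb)  # 8192 chunks of 256 MB in a 2 TB cache
```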
The cache provisioner module 214 may provision cache resources to virtual machines 208A-N based upon, inter alia, the cache requirements of the virtual machines 208A-N, availability of cache resources, and so on. The cache resources allocated to a particular virtual machine 208A-N may change over time in accordance with the operating conditions of the virtual machine 208A-N. The cache provisioner module 214 may provision cache chunks to a virtual machine 208A-N, which may determine the cache capacity of that virtual machine 208A-N. For example, if two 256 MB chunks are assigned to a specific virtual machine 208A-N, that virtual machine's cache capacity is 512 MB. The cache provisioner module 214 may be further configured to provision cache resources to other entities, such as the de-duplication cache 260 (e.g., cache resources 269).
In some embodiments, cache resources are provisioned using a “thin provisioning” approach. A thin provisioning approach may be used where the virtual machines 208A-N are configured to operate with fixed-size storage resources and/or when changes to the reported size of a storage resource would result in error condition(s). The cache storage device 216 may be represented within the virtual machines 208A-N as a fixed-size resource (e.g., through a virtual disk or other I/O interface, such as the I/O driver 218).
The cache virtualization module 233 may comprise a cache interface module 223 configured to manage access to the cache storage module 213 by the virtual machines 208A-N. The cache interface module 223 may provide one or more communication links and/or interfaces 124 through which the cache storage module 213 may service I/O requests for the virtual machines 208A-N (by use of the cache virtualization module 233), communicate configuration and/or allocation information, and so on. In some embodiments, the cache interface module 223 is configured to communicate with the virtual machines 208A-N through a virtual disk and/or using a Virtual Logical Unit Number (VLUN) driver 215. The VLUN driver 215 may be further configured to provide a communication link 124 between the virtual machines 208A-N and the cache storage module 213.
In some embodiments, the VLUN driver 215 is configured to represent dynamically provisioned cache resources as fixed-size VLUN disks 235A-N within the virtual machines 208A-N. In an exemplary embodiment, the cache 216 may comprise 2 TB of storage capacity. The cache provisioner 214 may allocate four gigabytes (4 GB) to the virtual machine 208A, one gigabyte (1 GB) to virtual machine 208B, three gigabytes (3 GB) to virtual machine 208N, and so on. As disclosed above, other virtual machines 208B-N on the host 202 may be allocated different amounts of cache resources, in accordance with the I/O requirements of the virtual machines 208B-N and/or the availability of cache resources. The VLUN driver 215 and VLUN disk 235A-N may be configured to represent the entire capacity of the cache device 216 to the virtual machines 208A-N (e.g., 2 TB) regardless of the actual allocation to the particular virtual machine 208A-N by the cache provisioner module 214. In addition, and as disclosed in further detail below, the physical cache resources 224A-N allocated to the virtual machine 208A may be discontiguous within the physical address space of the cache 216. The cache storage module 213 may further comprise a map module 217 configured to present the cache resources allocated to the virtual machines 208A-N as a contiguous range of virtual cache addresses, regardless of the location of the underlying physical storage resources.
As disclosed above, the CMS 220A-N may comprise an I/O driver 218A-N configured to monitor and/or filter I/O requests of the corresponding virtual machine 208A-N. The I/O driver 218A-N may be configured to forward the I/O requests to the CMS 220A-N, which may selectively service the I/O requests by use of the cache storage module 213. The I/O driver 218A-N may comprise a storage driver, such as a Windows Driver, or other storage driver adapted for use in an operating system and/or operating environment. The I/O driver 218A-N may be configured to monitor requests within an I/O and/or storage stack of the virtual machine 208A-N (e.g., the I/O stack 111). In some embodiments, the I/O driver 218A-N may further comprise an I/O filter 219A-N configured to monitor and/or service I/O requests directed to the primary storage system 212 (and/or other storage resources). I/O requests directed to the primary storage system 212 may be serviced directly at the primary storage system 212 (non-cached) or may be serviced using the cache storage module 213, as disclosed herein.
The I/O filter 219A-N may comprise a SCSI filter configured to manage data transfers between physical and virtual entities (e.g., primary storage system 212, VLUN disk 235A-N, and/or the cache storage module 213). The I/O filter 219A-N may be configured to identify the VLUN disk 235A-N within the virtual machine 208A-N, and manage capacity changes implemented by, inter alia, the cache provisioning module 214 (via the VLUN driver 215). As disclosed above, the VLUN disk 235A-N may be a virtual disk configured to represent dynamically allocated cache resources within the virtual machines 208A-N as fixed-size storage resources. The VLUN disk 235A-N may be configured to report a fixed storage capacity to the operating system of the virtual machine 208A-N rather than the actual, dynamic cache capacity allocated to the virtual machine 208A-N. Accordingly, the cache provisioner 214 may be configured to dynamically provision cache storage to/from the virtual machines 208A-N (through the VLUN disks 235A-N) without adversely affecting the virtual machines 208A-N.
As disclosed above, virtual machines 208A-N may be transferred between hosts 202, without powering down and/or resetting the virtual machine 208A-N. Such transfer operations may be simplified when the virtual machines 208A-N reference shared resources, since the virtual machines 208A-N will be able to access the same resources when transferred. However, virtual machines 208A-N that reference “local” resources (e.g., resources only available on the particular host), may be prevented from being transferred.
The virtual machines 208A-N may be configured to emulate shared storage in other ways. For example, in some embodiments, the virtual machines 208A-N may be configured to replicate one or more “shared” VLUN disks across a plurality of hosts 202, such that, to the hosts, the VLUN disks appear to be shared devices. For instance, the VLUN disks may share the same serial number or other identifier. The host 202 and/or the virtualization kernel 210 may, therefore, treat the VLUN disks as shared devices, and allow virtual machines 208A-N to be transferred to/from the host 202. The VMDK approach disclosed above may provide advantages over this approach, however, since a smaller number of “shared” disks need to be created, which may prevent exhaustion of limited storage references (e.g., a virtual machine may be limited to referencing 256 storage devices).
The cache provisioner module 214 may report the actual physical cache storage allocated to the virtual machine 208A via a communication link 124. The communication link 124 may operate separately from I/O data traffic between the VLUN driver 215 and the I/O filter 219A-N. Thus, asynchronous, out-of-band messages may be sent between the VLUN driver 215 and the I/O filter 219A-N. The cache provisioner module 214 may use the communication path 124 to dynamically re-provision and/or reallocate cache resources between the virtual machines 208A-N (e.g., inform the virtual machines 208A-N of changes to cache resource allocations). The I/O driver 218A-N may report the allocation information to the CMS 220A-N, which may use the allocation information to determine the number of cache tags 221A-N available to the virtual machine 208A-N, and so on.
As disclosed above, the cache resources allocated to a virtual machine 208A-N may be represented by cache tags 221A-N. The cache tags 221A-N may comprise, inter alia, mappings between identifiers of the virtual machine 208A-N (e.g., data I/O addresses) and storage locations within the cache 216 (e.g., physical addresses of cache pages). A cache tag 221 may, therefore, comprise a translation and/or mapping layer between data identifiers and cache resources (e.g., a cache chunk, page, or the like). In some embodiments, cache tags 221A-N are configured to have a linear 1:1 correspondence with physical cache pages, such that each cache tag 221A-N represents a respective page within the cache 216. The cache tags 221A-N may be organized linearly in RAM or other memory within a computing device 102.
Cache tags 221A-N may comprise cache metadata, which may include, but is not limited to: a next cache tag index, cache state, access metrics, checksum, valid map, a virtual machine identifier (VMID), and so on. The next tag index may comprise a link and/or reference to a next cache tag 221A-N. The cache state may indicate a current state of the cache tag 221A-N. As disclosed in further detail below, the state of a cache tag 221A-N may indicate whether the cache tag 221A-N corresponds to valid data, is dirty, and so on. The access metrics metadata may indicate usage characteristics of the cache tag 221A-N, such as a last access time, access frequency, and so on. A checksum may be used to ensure data integrity; the checksum may comprise a checksum of the cache data that corresponds to the cache tag 221A-N. The size of the checksum of the cache tags 221A-N may vary based on the size of the cache pages and/or the level of integrity desired (e.g., a user can obtain a higher level of integrity by increasing the size of the checksum). The valid unit metadata may identify portions of a cache page that comprise valid cache data. For example, a cache page may comprise a plurality of sectors, and the valid unit may indicate which sectors comprise valid cache data and which correspond to invalid and/or non-cached data.
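For illustration, a cache tag of the kind described above might be modeled as follows; the field names are hypothetical and merely mirror the metadata listed in this paragraph, together with the VMID and log identifier disclosed below.

```python
from dataclasses import dataclass
from enum import Enum, auto

class TagState(Enum):
    FREE = auto()
    INVALID = auto()
    VALID = auto()
    READ_PENDING = auto()
    WRITE_PENDING = auto()
    DEPLETED = auto()

@dataclass
class CacheTag:
    next_tag_index: int        # link/reference to a next cache tag
    state: TagState            # current cache tag state
    access_metrics: int        # e.g., last access time or access frequency
    checksum: int              # checksum of the corresponding cache data
    valid_map: int             # bitmap of sectors holding valid cache data
    vmid: int                  # virtual machine to which the tag is allocated
    log_id: int = 0            # log period/segment associated with the data
```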
In some embodiments, cache tags 221A-N may further comprise a VMID, which may be configured to identify the virtual machine 208A-N to which the cache tag 221A-N is allocated. Alternatively, ownership of the cache tag 221A-N may be determined without an explicit VMID.
A cache tag 221A-N may be in one of a plurality of different states (as indicated by the cache tag state field of the cache tag 221A-N), which may include, but are not limited to: a free state, an invalid state, a valid state, a read pending state, a write pending state, and a depleted state. A cache tag 221A-N may be initialized to a free state, which indicates that the cache tag 221A-N is not currently in use. The cache tag 221A-N transitions from a free state to a write pending state in response to a cache write and/or cache read update operation (a write to the cache caused by a read miss or the like). The cache tag 221A-N transitions to a valid state in response to completion of the cache write. The cache tag 221 may revert to the write pending state in response to a subsequent write and/or modify operation. The cache tag 221A-N transitions to a read pending state in response to a request to read data of the cache tag, and reverts to the valid state in response to completion of the read. The cache tag 221A-N may transition to the invalid state in response to an attempt to perform a write operation while the cache tag 221A-N is in the read pending or write pending state. The cache tag 221A-N transitions from the invalid state to the free state in response to completing the write or read update. A cache tag 221A-N transitions to the depleted state in response to failure of a read or write operation (e.g., from the read pending or write pending state).
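The state transitions described above can be summarized as a transition table; the sketch below assumes each transition is driven by a simple event name, which is terminology chosen for the example rather than taken from the disclosure.

```python
# Allowed cache tag state transitions, keyed by (current state, event).
TRANSITIONS = {
    ("free", "write"): "write_pending",           # cache write or read-miss update
    ("write_pending", "complete"): "valid",       # cache write finished
    ("valid", "write"): "write_pending",          # subsequent write/modify operation
    ("valid", "read"): "read_pending",            # read request for the tag's data
    ("read_pending", "complete"): "valid",        # read finished
    ("read_pending", "write"): "invalid",         # write attempted during pending I/O
    ("write_pending", "write"): "invalid",        # write attempted during pending I/O
    ("invalid", "complete"): "free",              # pending write or read update finished
    ("read_pending", "fail"): "depleted",         # read operation failed
    ("write_pending", "fail"): "depleted",        # write operation failed
}

def next_state(state, event):
    """Return the next cache tag state, or raise if the transition is not allowed."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {state} on {event}")
```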
In some embodiments, cache tags 221A-N may further comprise a pinned state indicator. Cache tags 221A-N that are pinned may be protected from being evicted from the cache 216, allocated to another virtual machine 208A-N, or the like. Pinning cache tags 221A-N may also be used to lock a range of cache addresses. In certain situations, a portion of data associated with a read operation is available in the cache 216, but a portion is not available (or not valid), resulting in a partial cache hit. The CMS 220A-N may determine whether to retrieve all of the data from the primary storage system 212 or retrieve a portion from the cache 216 and the remainder from the primary storage system 212, which may involve more than one I/O to the primary storage system 212.
In some embodiments, cache tags 221A-N may further comprise respective log indicators. The log indicators may comprise a mapping and/or translation layer between the cache tags 221A-N and portions of the cache log. As disclosed in further detail herein, cache tags 221A-N may be associated with particular log intervals, sections, and/or periods. The log identifier field may be used to identify data to write back to the primary storage system 212 during log synchronization operations.
In some embodiments, the CMS 220A-N is configured to manage a partial cache miss to minimize the number of I/O requests forwarded on to the primary storage system 212. In addition to managing partial cache miss I/O requests, the CMS 220A-N mitigates the amount of fragmentation of I/Os to primary storage based on I/O characteristics of the I/O requests. Fragmentation of I/Os (also known as I/O splitting) refers to an I/O request that crosses a cache page boundary or is divided between data that resides in the cache and data that resides on the primary storage. The I/O characteristics may include whether the I/O is contiguous, the size of the I/O request, the relationship of the I/O request size to the cache page size, and the like. In effectively managing partial cache hits and fragmentation of I/O requests, the CMS 220A-N may coalesce I/O requests for non-contiguous address ranges and/or generate additional I/O requests to either the cache storage module 213 or the primary storage system 212.
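One hedged illustration of how a partial cache hit might be split while limiting the number of I/Os forwarded to primary storage is sketched below; the function and its page-granular interface are assumptions for the example only. Contiguous runs of non-cached pages are coalesced into single reads from primary storage.

```python
def plan_read(start_page, page_count, cached_pages):
    """Split a read spanning multiple cache pages into sub-requests.

    Contiguous runs of cached pages are read from the cache; contiguous runs
    of non-cached pages are coalesced into single reads from primary storage.
    """
    plan = []
    run_start, run_is_hit = start_page, start_page in cached_pages
    for page in range(start_page + 1, start_page + page_count + 1):
        is_hit = page in cached_pages and page < start_page + page_count
        if page == start_page + page_count or is_hit != run_is_hit:
            source = "cache" if run_is_hit else "primary"
            plan.append((source, run_start, page - run_start))
            run_start, run_is_hit = page, is_hit
    return plan

# Example: pages 10-15 requested, pages 11-13 present in the cache.
print(plan_read(10, 6, {11, 12, 13}))
# [('primary', 10, 1), ('cache', 11, 3), ('primary', 14, 2)]
```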
The cache tag manager 242 may be configured to manage the cache tags 221 allocated to one or more virtual machines 208A-N, which may comprise maintaining associations between virtual machine identifiers (e.g., logical identifiers, address, etc.) and data in the cache 216. The cache tag manager 242 may be configured to dynamically add and/or remove cache tags 221 in response to allocation changes made by the cache provisioner module 214. In some embodiments, the cache tag manager 242 is configured to manage cache tags 221 of a plurality of different virtual machines 208A-N. The different sets of cache tags 221 may be maintained separately (e.g., within separate data structures and/or in different sets of cache tags 221) and/or in a single data structure.
The cache tag translation module 244 may be configured to correlate cache tag identifiers with cache storage locations (e.g., cache addresses, cache pages, etc.).
The log association module 245 may be configured to map cache tags 221 to corresponding portions of the cache log 320. As disclosed in further detail herein, the log association module 245 may be configured to associate cache tags 221 with respective sections, intervals, and/or portions of the cache log 320 by use of log identifiers (e.g., using a log identifier field within the cache tags 221). Accordingly, the log association module 245 (and log identifiers of the cache tags 221) may comprise a translation layer between the cache tags 221 and respective portions of the cache log 320.
The access metrics module 246 may be configured to determine and/or maintain cache access metrics using, inter alia, one or more clock hand sweep timers, or the like. The steal candidate module 248 may be configured to identify cache data and/or cache tags that are candidates for eviction based on access metrics and/or other cache policy (e.g., least recently used, staleness, sequentiality, etc.), or the like.
The cache page management module 250 may be configured to manage cache resources (e.g., cache page data) and related operations. The valid unit map module 252 may be configured to identify valid data stored in cache 216 and/or a primary storage system 212. The page size management module 254 may be configured to perform various page size analysis and adjustment operations to enhance cache performance, as disclosed herein. The interface module 256 may be configured to provide one or more interfaces to allow other components, devices, and/or systems to interact with the CMS 220, which may include, but is not limited to: modifying the number and/or extent of cache tags 221 allocated to a virtual machine 208A-N, querying and/or setting one or more configuration parameters of the CMS 220, accessing cache tags 221 (e.g., for a snapshot, checkpoint, or other operation), or the like.
The cache state retention module 257 may be configured to retain the portions of the cache state of the CMS 220, which may include the cache tags 221, de-duplication index (disclosed below), and so on, in response to transferring the virtual machine 208A-N to a different host. As disclosed above, the cache tags 221 may represent a working set of the cache of a particular virtual machine 208A-N, which may be developed through the use of one or more cache admission and/or eviction policies (e.g., the access metrics module 246, steal candidate module 248, and so on), in response to the I/O characteristics of the virtual machine 208, and/or the applications running on the virtual machine 208A-N.
The CMS 220 may develop and/or maintain a working set for the cache using, inter alia, a file system model. The cache 216 may comprise one or more solid-state storage devices, which may provide fast read operations, but relatively slow write and/or erase operations. These slow write operations can result in significant delay when initially developing the working set for the cache. Additionally, the solid-state storage devices comprising the cache 216 may have a limited lifetime (a limited number of write/erase cycles). After reaching the “write lifetime” of a solid-state storage device, portions of the device become unusable. These characteristics may be taken into consideration by the CMS 220 in making cache admission and/or eviction decisions.
The cache state transfer module 258 may be configured to transfer portions of the cache state of the virtual machine 208A-N between hosts 202 and/or to persistent storage (e.g., in a snapshot operation). Transferring the cache state may comprise transferring cache tags 221 maintained in the virtualization kernel to a remote host and/or non-volatile storage.
The cache tag snapshot module 259 may be configured to maintain one or more “snapshots” of the working set of the cache of a virtual machine 208A-N. As disclosed above, a snapshot refers to a set of cache tags 221 and/or related cache metadata at a particular time. The snapshot module 259 may be configured to store a snapshot of the cache tags 221 on a persistent storage medium and/or load a stored snapshot into the CMS 220.
The cache provisioner module 214 may be configured to maintain mappings between virtual machines and the cache resources allocated to the virtual machines 208A-N. The cache provisioner module 214 may implement mappings that can be dynamically changed to reallocate cache resources between various virtual machines 208A-N. The mappings may be further configured to allow the cache provisioner to represent dynamically allocated cache resources to the virtual machines 208A-N as contiguous ranges of “virtual cache resources,” independent of the underlying physical addresses of the cache 216.
The map module 217 may be configured to map virtual cache resources (e.g., virtual cache addresses) 304 to physical cache resources in the physical address space 306 of the cache 216. In some embodiments, the map module 217 may comprise an “any-to-any” index of mappings between virtual cache addresses allocated to the virtual machines 208A-N and the physical cache addresses within the cache 216. Accordingly, the virtual cache addresses may be independent of the underlying physical addresses of the cache 216. The translation layer implemented by the map module 217 may allow cache tags 221A-N to operate within a contiguous virtual address space despite the fact that the underlying physical allocations 224A may be non-contiguous within the cache 216. Alternatively, in some embodiments, the mapping module 217 may be omitted, and the CMS 220A-N may be configured to directly manage physical cache addresses within the cache 216.
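The following sketch illustrates an “any-to-any” translation of the kind attributed to the map module 217; the class name, chunk-indexed scheme, and example values are assumptions chosen for brevity rather than the disclosed mapping structure.

```python
class CacheMap:
    """Map the contiguous virtual cache addresses presented to a virtual machine
    onto the possibly discontiguous physical chunks allocated to it."""

    def __init__(self, chunk_size):
        self.chunk_size = chunk_size
        self.vm_chunks = {}                       # vmid -> list of physical chunk numbers

    def allocate(self, vmid, physical_chunks):
        # Physical chunks may be scattered anywhere in the cache address space.
        self.vm_chunks.setdefault(vmid, []).extend(physical_chunks)

    def translate(self, vmid, virtual_address):
        """Map a virtual cache address of a VM to a physical cache address."""
        chunks = self.vm_chunks[vmid]             # only chunks owned by this VM are visible
        index, offset = divmod(virtual_address, self.chunk_size)
        return chunks[index] * self.chunk_size + offset

# A VM sees a contiguous range [0, 3 * chunk_size) even though its chunks
# (7, 2, 9) are non-contiguous in the physical address space of the cache.
cache_map = CacheMap(chunk_size=256 * 1024 * 1024)
cache_map.allocate(vmid="VM-1", physical_chunks=[7, 2, 9])
print(cache_map.translate("VM-1", 300 * 1024 * 1024))  # falls within physical chunk 2
```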
The map module 217 may be leveraged to secure data in the cache 216. In some embodiments, the cache storage module 213 may restrict access to data in the cache 216 to particular virtual machines 208A-N and/or may prevent read-before-write conditions. The cache provisioner module 214 may be configured to restrict access to physical cache chunks 302 to the virtual machine 208A-N to which the chunk 302 is allocated. For example, the cache chunk labeled VM-10 may only be accessible to the virtual machine 208A based on, inter alia, the mapping between VM-1 208A and the cache chunk VM-10 in the map module 217. Moreover, the indirect addressing of the map module 217 may prevent virtual machines 208A-N from directly referencing and/or addressing physical cache chunks 302 allocated to other virtual machines 208A-N.
As disclosed above, the cache storage module 213 may be configured to control access to data stored within the cache 216 by use of, inter alia, the cache provisioner module 214 and/or map module 217. In some embodiments, the CMS 220A-N and virtual machines 208A-N reference cache data by use of virtual cache addresses rather than physical addresses of the cache 216. Accordingly, the virtual machines 208A-N may be incapable of directly referencing the data of other virtual machines 208A-N. The cache provisioner module 214 may be further configured to allocate different, incompatible virtual cache addresses to different virtual machines 208A-N, such as virtual cache addresses in different, non-contiguous address ranges and/or address spaces. The use of different, incompatible ranges may prevent the virtual machines 208A-N from inadvertently (or intentionally) referencing virtual and/or physical cache resources of other virtual machines 208A-N.
Securing data may comprise preventing read-before-write conditions that may occur during dynamic cache resource provisioning. For example, a first virtual machine 208A may cache sensitive data within a cache chunk 302 that is dynamically reallocated to another virtual machine 208B. The cache storage module 213 may be configured to prevent the virtual machine 208B from reading data from the chunk 302 that were not written by the virtual machine 208B. In some embodiments, the cache provisioner 214 may be configured to erase cache chunks 302 in response to reassigning the chunks 302 to a different virtual machine 208A-N (or removing the association between a virtual machine 208A-N and the cache chunk 302). Erasure may not be efficient, however, due to the characteristics of the cache 216; erasing solid-state storage may take longer than other storage operations (100 to 1000 times longer than read and/or write operations), and may increase the wear on the storage medium. Accordingly, the cache storage module 213 may be configured to prevent read-before-write conditions in other ways. In some embodiments, for example, the cache storage module 213 may be configured to TRIM reallocated chunks 302 (e.g., logically invalidate the data stored on the chunks 302). Cache chunks 302 that are erased and/or invalidated prior to being reallocated may be referred to as “unused chunks.” By contrast, a chunk 302 comprising data of another virtual machine 208A-N (and not erased or TRIMed) is referred to as a “used” or “dirty chunk,” which may be monitored to prevent read-before-write security hazards.
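A minimal sketch of one way to track “unused” versus “dirty” chunks in order to prevent read-before-write hazards is shown below; the class and method names are illustrative assumptions rather than the disclosed mechanism.

```python
class ChunkReallocator:
    """On reallocation, a chunk is logically invalidated (TRIMed) and tracked as
    'unused' until its new owner writes to it, so data left behind by the
    previous owner is never exposed."""

    def __init__(self):
        self.owner = {}          # chunk -> vmid of the current owner
        self.unused = set()      # chunks invalidated since their last reallocation

    def reallocate(self, chunk, new_vmid):
        self.owner[chunk] = new_vmid
        self.unused.add(chunk)   # TRIM: logically invalidate the previous VM's data

    def write(self, chunk, vmid):
        assert self.owner[chunk] == vmid
        self.unused.discard(chunk)   # the chunk now holds the new owner's data

    def read_allowed(self, chunk, vmid):
        # A VM may not read a chunk it owns until it has written to it.
        return self.owner.get(chunk) == vmid and chunk not in self.unused
```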
The cache storage module 213 may comprise a cache virtualization module 233 configured to interface with (and/or expose caching services to) virtual machine 208A by use of the cache interface module 223, which may comprise representing cache resources as a VLUN disk 235A within the virtual machine 208A, monitoring I/O requests of the virtual machine 208A by use of the I/O driver 218A and/or filter 219A, and selectively servicing the monitored I/O requests by use of the cache storage module 213 (via the communication link 124). The standard virtual machines 208B-N may access cache services differently. In some embodiments, I/O requests of the virtual machines 208B-N are handled within a storage stack 211. The storage stack 211 may comprise an I/O framework of the host 202 and/or virtualization kernel 210. The storage stack 211 may define a storage architecture in which storage services, such as file system drivers, volume drivers, disk drivers, and the like, are deployed. Storage services may be configured to interoperate by issuing and/or consuming I/O requests within various layers of the I/O stack 211. The cache interface module 223 may comprise an I/O driver 218X and/or filter driver 219X configured to monitor I/O requests of the virtual machines 208B-N in the storage stack 211. Selected I/O requests of the virtual machines 208B-N may be serviced using the cache storage module 213.
The cache virtualization module 233 may comprise a CMS 220X operating within the host 202 and/or virtualization kernel 210. The I/O driver 218X and/or filter driver 219X may be configured to direct I/O requests of the virtual machines 208B-N to the CMS 220X, which may selectively service the I/O requests, as disclosed herein. The CMS 220X may be configured to maintain cache metadata for the virtual machines 208B-N, including, inter alia, cache tags 221B-N. In some embodiments, the CMS 220X maintains the cache tags 221B-N in a single data structure. Alternatively, the cache tags 221B-N may be maintained separately and/or may be managed by separate instances of the CMS 220X.
As disclosed above, the cache provisioner 214 may be configured to provision cache storage resources to the virtual machines 208A-N. The cache provisioner 214 may be configured to dynamically re-provision and/or reallocate cache resources in accordance with user preferences, configuration, and/or I/O requirements of the virtual machines 208A-N. The virtual machines 208A-N may have different I/O requirements, which may change over time due to, inter alia, changes in operating conditions, usage characteristics and/or patterns, application behavior, and the like. The cache resources available to the virtual machines 208A-N may vary as well due to, inter alia, virtual machines 208A-N being migrated to and/or from the host 202, virtual machines 208A-N coming on-line, virtual machines 208A-N becoming inactive (e.g., shut down, suspended, etc.), or the like. The cache provisioner 214 may, therefore, be configured to adjust the allocation of cache resources in response to I/O requirements of particular virtual machines 208A-N and/or the I/O characteristics and/or I/O load on the host 202 (due to other virtual machines 208A-N, other processes and/or services running on the host 202, and so on).
As disclosed above, in some embodiments, the CMS 220A-N and/or cache storage module 213 may be configured to operate in a cache logging mode. In a cache logging mode, cache write operations may comprise writing data to the cache 216 and logging the write operation on a persistent cache log 320. The cache storage module 213 may be configured to acknowledge completion of the cache write operation (and corresponding I/O request) in response to storing a record of the write operation within the log 320.
The cache log module 313 may be configured to generate the log 320 of cache storage operations on a persistent, non-volatile storage device 316, such as a hard disk, solid-state storage device, or the like. The log 320 may comprise a record of an ordered sequence of storage operations performed on the cache 216. In some embodiments, the cache log module 313 is configured to generate a log 320 comprising a plurality of log entries 322, wherein each log entry 322 corresponds to one or more write operation(s) performed on the cache 216. Each entry 322 in the log 320 may comprise one or more data segments 324 and log metadata entries 325. The data segment 324 may comprise the data that was written into the cache 216 in the corresponding write operation. The log metadata 325 may comprise metadata pertaining to the cache storage operation, which may include, but is not limited to: an identifier of the data (e.g., logical identifier, logical address, block address, etc.), an identifier of the primary storage system 212 associated with the data, an address within the primary storage system 212 associated with the data, an identifier of the storage client 104 associated with the cache write operation, and so on. In some embodiments, the log metadata 325 may further comprise a virtual machine identifier, virtual machine disk identifier, and/or the like configured to identify the virtual machine 208A-N associated with the cache write operation. In some embodiments, and as disclosed in further detail below, the log metadata 325 may be further configured to reference a log segment, period, and/or interval associated with the entry 322. In some embodiments, an entry 322 may comprise data segments 324 and/or log metadata entries 325 corresponding to each of a plurality of cache write operations.
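For illustration only, a log entry 322 might be serialized as a metadata header followed by its data segment so that entries can be appended back-to-back on the log device; the field names and encoding (a length-prefixed JSON header) are assumptions, not the disclosed format.

```python
import json
import struct

def encode_log_entry(data, logical_address, primary_storage_id, vmid, log_segment):
    """Serialize an illustrative log entry: metadata header, then data segment."""
    metadata = {
        "logical_address": logical_address,     # identifier of the data
        "primary_storage": primary_storage_id,  # backing store associated with the data
        "vmid": vmid,                           # virtual machine that issued the write
        "log_segment": log_segment,             # log segment/period of the entry
        "length": len(data),
    }
    header = json.dumps(metadata).encode()
    # 4-byte header-length prefix, then the metadata header, then the data segment.
    return struct.pack(">I", len(header)) + header + data
```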
The cache log module 313 may be configured to store the log 320 sequentially within a physical address space 416 of the non-volatile storage device 316. As used herein, the “physical address space” refers to a set of addressable storage locations within the non-volatile storage device 316. The physical address space 416 may comprise a series of sector addresses, cylinder-head-sector (CHS) addresses, page addresses, or the like.
The cache log module 313 may be configured to record cache entries 322 at sequential storage locations within the physical address space 416 of the non-volatile storage device 316. The sequential order of cache entries 322 within the physical address space 416 may correspond to the temporal order in which the corresponding cache storage operations represented by the entries 322 were received and/or performed. The sequential order of the cache entries 322 within the log 320 may, therefore, be independent of data identifier(s) and/or addressing information of the corresponding cache write operations. As such, logging cache storage operations sequentially within the log 320 may comprise converting “random” write operations (write operations pertaining to random, arbitrary physical addresses of the primary storage system 212) into a series of sequential write operations.
In some embodiments, the cache log module 313 is configured to sequentially append log entries 322 to the log 320 at a current append point 428 within the physical address space 416. After appending an entry 322 to the log 320, the append point 428 may be incremented to the next sequential physical address, and so on. The cache log module 313 may manage the physical address space 416 as a cycle; after appending an entry 322 at the last physical storage location X 417X, the append point 428 may reset back to the first physical storage location 0 417A.
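A minimal sketch of cyclic, sequential appending at the append point 428, assuming a fixed number of equally sized entry slots (entry sizes and the device interface are left abstract here):

```python
class SequentialLog:
    """Appends entries at strictly increasing storage locations, wrapping at the end."""

    def __init__(self, num_slots: int):
        self.num_slots = num_slots       # physical storage locations 0 .. X
        self.append_point = 0            # current append point 428
        self.slots = [None] * num_slots

    def append(self, entry) -> int:
        slot = self.append_point
        self.slots[slot] = entry         # written sequentially, independent of the
                                         # entry's logical identifier/address
        self.append_point = (slot + 1) % self.num_slots  # wrap from X back to 0
        return slot

log = SequentialLog(num_slots=4)
for i in range(6):
    print(log.append(f"entry-{i}"))      # prints 0, 1, 2, 3, 0, 1 (cyclic reuse)
```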
The non-volatile storage device 316 may be capable of much higher write speeds for sequential operations as compared to write operations that are randomly distributed within the physical address space of the device 316. Therefore, storing log entries 322 sequentially within the physical address space 416 may allow the cache log module 313 to perform write operations much more efficiently than the random write operations performed in, inter alia, write-through and/or write-back cache modes. In some embodiments, for example, the non-volatile storage device 316 may comprise a hard disk capable of sequential writes at 200 MB/second or more. In other embodiments, the cache log module 313 may be configured to store log information on a solid-state storage device capable of sequential writes at 500 MB/second or more. Random write speeds for these types of storage devices may be significantly lower.
The cache log module 313 may comprise a synchronization module 317 configured to synchronize the cache to the primary storage system 212. As used herein, synchronizing refers to updating the primary storage system 212 in accordance with the cache storage operations represented in the log 320 (e.g., transferring “dirty” cache data from the cache 216 to the primary storage system 212 and/or other backing store). The synchronization module 317 may be configured to synchronize portions of the log 320 to the primary storage system 212, which may comprise “applying” or “committing” portions of the log 320 by, inter alia, implementing the cache write operations represented by the cache entries 322 on the primary storage system 212.
The sequential format of the log 320 disclosed above may be highly efficient for write operations, but may exhibit poor read performance due to, inter alia, overhead involved in identifying the storage location of particular cache entries 322 in the log 320, and so on. Therefore, in some embodiments, committing the log 320 may comprise accessing cache data from the cache 216 rather than the cache log 320. In some embodiments, committing the log 320 may comprise: a) accessing, within the cache 216, data corresponding to cache write operations recorded in the log 320, and b) writing the data to the primary storage system 212 (and/or other backing stores). As disclosed above, cache tags 221 may be correlated to the log 320 by use of respective cache tag indicators 424. In some embodiments, the cache tag indicators 424 may indicate whether the corresponding cache data is “dirty.” As used herein, “dirty” data refers to data that has been written and/or modified within the cache, but has not been written to the corresponding backing store (e.g., primary storage system 212); “clean” cache data refers to cache data that has been written to the backing store. Accordingly, accessing data corresponding to the cache write operations recorded in the log 320 may comprise accessing data associated with dirty cache tags 221 and writing the accessed data to the primary storage system 212. Committing the log 320 may further comprise updating the cache tags 221 to indicate that the data has been committed to the primary storage system 212 (e.g., set the cache tag indicator(s) 424 to clean) and/or updating the log 320 to indicate that the entries 322 therein have been committed.
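As one non-authoritative sketch of committing from the cache rather than the log, assuming hypothetical `cache_read` and `backing_write` callbacks standing in for the cache 216 and the primary storage system 212:

```python
from dataclasses import dataclass

@dataclass
class CacheTag:
    logical_id: int    # identifier 420 of the data
    cache_addr: int    # storage location 422 within the cache 216
    dirty: bool        # indicator 424: written to the cache but not to the backing store

def commit_from_cache(tags, cache_read, backing_write):
    """Commit logged writes by reading dirty data from the cache, not the log."""
    for tag in tags:
        if not tag.dirty:
            continue                        # clean data is already on the backing store
        data = cache_read(tag.cache_addr)   # random reads are cheaper from the cache
        backing_write(tag.logical_id, data)
        tag.dirty = False                   # mark the cache tag clean after the write
```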
The cache log module 313 may be configured to associate each log segment 326A-N with respective log segment metadata 327A-N. The log segment metadata 327A-N may include a segment identifier 329A-N, which may determine an order of the log segments 326A-N. The segment identifiers 329A-N may comprise any suitable identifying information including, but not limited to: a time stamp, a sequence number, a logical clock value, a Lamport clock value, a beacon value, or the like. The log segment metadata 327A-N may further include, but is not limited to: a synchronization indicator configured to indicate whether the log segment 326A-N has been committed, a discardability indicator configured to indicate whether the log segment 326A-N needs to be retained on the non-volatile storage device 316, and so on. As disclosed in further detail herein, after committing a log segment 326A-N, the log segment 326A-N may be marked as committed and/or discardable, which may allow the cache log module 313 to reuse the storage resources occupied by the log segment 326A-N on the non-volatile storage device 316. In some embodiments, the log segment metadata 327A-N is stored on the non-volatile storage device 316 (e.g., as a header to a log segment 326A-N).
As disclosed above, an operation to write data into the cache may comprise: a) writing the data to the cache 216, and b) appending a log entry 322 corresponding to the cache write operation to the log 320. The cache write operation (and corresponding I/O request) may be acknowledged as complete in response to appending the log entry 322. As disclosed above, the data written into the cache 216 may be associated with cache metadata, such as respective cache tags 221. The cache tags 221 may include, inter alia, an identifier 420 of the data, such as a logical identifier, a logical address, a data identifier, a storage I/O address, the address of the data on a backing store (e.g., primary storage system 212), a storage location 422 of the data within the cache 216, and the like. The cache tags 221 may further comprise log indicators 424 configured to map the cache tags 221 to respective log segments 326A-N. The log indicator 424 may identify the log segment 326A-N comprising the entry 322 that corresponds to the cache write operation in which the data of the cache tag 221 was written to the cache 216. The log indicator 424 of a cache tag 221 may be updated when data of the cache tag is written to the cache 216. The CMS 220 may be configured to determine the current segment identifier 329A-N, and to set the log indicator field 424 accordingly. Alternatively, or in addition, the cache log module 313 may be configured to publish the current segment identifier 329A-N to the CMS 220, which may apply the published segment identifier 329A-N in response to writing and/or updating data of the cache tags 221.
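The association between cache tags 221 and log segments might be maintained as sketched below, where the cache log module publishes the current segment identifier 329A-N and each write stamps the tag's log indicator field 424 (names are illustrative only):

```python
class CacheTagTable:
    """Tracks cache tags and stamps each write with the published segment identifier."""

    def __init__(self):
        self.tags = {}               # logical_id -> {"cache_addr": ..., "segment_id": ...}
        self.current_segment_id = 0  # segment identifier published by the cache log module

    def publish_segment(self, segment_id: int):
        self.current_segment_id = segment_id

    def record_write(self, logical_id: int, cache_addr: int):
        # The log indicator field 424 identifies the segment containing the entry 322
        # for the write in which this data was stored in the cache.
        self.tags[logical_id] = {
            "cache_addr": cache_addr,
            "segment_id": self.current_segment_id,
        }
```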
The cache log module 313 may be configured to commit portions of the log 320 to the primary storage system 212 (and/or other backing stores). Committing a portion of the log 320 may comprise committing one or more log segments 326A-N and, as disclosed above, committing a log segment may comprise updating the primary storage system 212 in accordance with the write operations recorded within entries 322 of the one or more log segments 326A-N.
As disclosed above, in some embodiments, storage clients 104 may repeatedly write data to the same addresses and/or identifiers. For example, a storage client 104 may repeatedly update certain portions of a file and/or database table.
The synchronization module 317 may be configured to combine multiple, redundant write operations to the same data identifier into a single write to the primary storage 412. In some embodiments, the synchronization operation comprises accessing, from the cache 216, data associated with the log indicator 329B of the log segment to be committed.
The CMS 220 may identify data associated with the log indicator 329A in reference to the log indicators 424 of the cache tags 221 (e.g., by use of the log association module 245). In some embodiments, the cache tags 221 may be indexed by their respective log indicators 424. In some embodiments, the CMS 220 may comprise a separate log indicator index data structure 421 configured to provide efficient mappings between log indicators and the corresponding cache tags 221. The log indicator index 421 may comprise a hash table, tree, or similar data structure. In some embodiments, the log indicator index 421 may comprise indirect references to the cache tags 221 (e.g., links, pointers, or the like) such that the contiguous memory layout of the cache tags 221 can be preserved.
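A minimal sketch of such an index, assuming a hash table keyed by log indicator that stores references (identifiers) rather than tag copies, so the tags' contiguous layout is untouched:

```python
from collections import defaultdict

class LogIndicatorIndex:
    """Maps a log indicator (segment identifier) to the cache tags written in that segment."""

    def __init__(self):
        self._by_segment = defaultdict(set)   # segment_id -> set of logical identifiers

    def update(self, logical_id, old_segment, new_segment):
        if old_segment is not None:
            self._by_segment[old_segment].discard(logical_id)
        self._by_segment[new_segment].add(logical_id)

    def tags_for(self, segment_id):
        # Indirect references only; the cache tags themselves stay where they are.
        return set(self._by_segment.get(segment_id, ()))
```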
The CMS 220 may be configured to read the corresponding data from the cache 216 (as if performing a read operation), and to provide the data to the synchronization module 317 (e.g., read the data at CA[0] and CA[X] respectively). Alternatively, the CMS 220 may be configured to provide the synchronization module 317 with the cache addresses of the identified cache tags 221, and the synchronization module 317 may perform the read operation(s) directly on the cache 216. The CMS 220 may be further configured to provide the synchronization module 317 with data identifier(s) of the identified cache tags 221, which may include a disk identifier and/or disk address of the data on a backing store, such as the primary storage system 212. The synchronization module 317 may use the data accessed from CA[0] and CA[X] (and the corresponding disk identifier and addressing information obtained from the corresponding cache tags 221[0] and 221[X]) to update 417 the primary storage 212. The update 417 may comprise issuing one or more storage requests to the primary storage system 212 (or other backing store) to write the data of CA[0] and CA[X] at the appropriate address(es).
The synchronization module 317 may be further configured to mark the primary storage system 212 (and/or other backing stores) with persistent metadata 449. The persistent metadata 449 may be stored at one or more pre-determined storage locations within the primary storage system 212 (and/or other backing stores). The persistent metadata 449 may comprise a log indicator 329B that corresponds to the most recent cache log synchronization operation performed thereon.
The synchronization module 317 may be further configured to reclaim the log segment 326B after committing the log segment 326B to the primary storage system 212 (and/or other backing stores). Reclaiming the log segment 326B may comprise indicating that the contents of the log segment 326B no longer need to be retained within the log 320 and/or on the non-volatile storage device 316. In some embodiments, reclaiming the log segment 326B comprises updating metadata 327B of the log segment 326B to indicate that the log segment can be erased, overwritten, deallocated, or the like. Alternatively, or in addition, reclaiming the log segment 326B may comprise erasing the contents of the log segment 326B, deallocating the log segment 326B, and/or allowing the log segment 326B to be overwritten. In embodiments in which a solid-state storage device 316 is used to store the log 320, reclaiming the log segment 326B may comprise issuing one or more TRIM hints, messages, and/or commands indicating that the log segment 326B no longer needs to be retained.
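Reclamation could be sketched as below, with the segment metadata 327A-N modeled as a plain dictionary and the TRIM hint abstracted behind a caller-supplied callback (both are assumptions; no specific device interface is implied):

```python
def reclaim_segment(segment_meta, trim_hint=None):
    """Mark a committed log segment discardable and optionally hint the device."""
    if not segment_meta.get("committed"):
        raise ValueError("only committed log segments may be reclaimed")
    segment_meta["discardable"] = True   # contents no longer need to be retained
    if trim_hint is not None:
        # Stand-in for a TRIM/discard hint covering the segment's storage range.
        trim_hint(segment_meta["start"], segment_meta["length"])
```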
In some embodiments, the cache log module 313 may be configured to preserve cache data until the data is committed to the primary storage system 212. For example, before the contents of the log segment 326B are committed, the cache storage module 213 may receive a request to write data ID[0] (in a next log segment 326A). Performing the requested operation may comprise overwriting the data ID[0] in the cache 216 (and the cache tag 221[0]), such that the data ID[0] of entry 432A would differ from the data read from the cache 216. In some embodiments, the synchronization module 317 is configured to prevent such hazards by committing log segments 326A-N atomically, as the log segments 326A-N increment; the cache storage module 213 may be configured to block and/or stall incoming cache write requests while the synchronization module 317 commits the log segment 326B. In addition, the synchronization policy module 337 may be configured to schedule a synchronization operation when the current log segment 326B is filled and/or is to be incremented to a next log segment 326A.
In other embodiments, the cache log module 313 may be configured to allow the log 320 to include multiple, uncommitted log segments 326A-N, and may perform synchronization operations while other cache write operations continue. The cache storage module 213 may avoid hazard conditions by marking uncommitted cache tags 221 and the corresponding cache data as copy-on-write. As used herein, an “uncommitted” cache tag 221 refers to a cache tag 221 corresponding to data that has not been committed to the primary storage system 212. Cache operations that would overwrite a copy-on-write cache tag 221 may comprise allocating a new cache tag 221, and performing the write operation, while maintaining the original, uncommitted cache tag 221 and corresponding data in the cache 216. After the original cache tag 221 is committed, it may be removed (along with the corresponding data in the cache 216).
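One way to sketch the copy-on-write handling of uncommitted cache tags, with `cache_alloc` as a hypothetical allocator returning a new cache address:

```python
def write_with_copy_on_write(tags, retained, logical_id, new_data, cache_alloc):
    """Handle a write that would overwrite an uncommitted (copy-on-write) cache tag.

    The original tag and its cache data are retained for the pending commit;
    the incoming write gets a freshly allocated tag and cache location.
    """
    current = tags.get(logical_id)
    if current is not None and not current["committed"]:
        retained.append(current)              # keep original tag/data until its segment commits
    tags[logical_id] = {
        "cache_addr": cache_alloc(new_data),  # new location for the new data
        "committed": False,
    }
```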
The cache log module 313 may further comprise a recovery module 319. The recovery module 319 may be configured to perform one or more recovery operations in response to detecting a failure condition. As used herein, a failure condition may include, but is not limited to: loss of data in the cache 216, loss of cache metadata by the CMS 220 (e.g., cache tags 221) and/or virtual machines 208A-N, data corruption (e.g., uncorrectable errors), and the like. A failure condition may occur for any number of reasons including, but not limited to: a hardware fault, a software fault, power interruption, storage media failure, storage controller failure, media wear, poor operating conditions, an invalid shutdown, an invalid reboot, or the like. The recovery module 319 may comprise a recovery policy module 339 configured to detect such failure conditions and, in response, to initiate recovery operations by the recovery module 319. Accordingly, the recovery policy module 339 may be configured to monitor operating conditions of the cache storage module 213, cache 216, computing device 102, host 202, virtual machines 208A-N, virtualization kernel 210, and the like.
The recovery module 319 may be configured to synchronize the primary storage system 212 to the contents of the log 320, which may comprise: a) determining a synchronization state of the primary storage system 212 and/or identifying the set of log entries 322 that have not been committed to the primary storage system 212, and b) applying the log 320 to the primary storage system 212 in accordance with the determined synchronization state (e.g., committing the write operations corresponding to the identified entries 322). Determining the synchronization state may comprise determining the last log segment 326A-N that was committed to the primary storage system 212 (if any). The last log segment 326A-N committed to the primary storage system 212 may be determined by reference to, inter alia, the synchronization metadata 459 stored on the non-volatile storage device 316 (or other persistent storage) and/or the persistent metadata 449 stored on the primary storage system 212 itself. The recovery module 319 may be configured to commit the contents of the log 320 in accordance with the determined synchronization state (e.g., starting from the last log segment 326A-N known to have been committed to the primary storage system 212 and continuing through the end of the log 320). The recovery operation may further comprise clearing the log 320 (removing and/or invalidating the contents of the log 320), and resuming cache logging operations, as disclosed herein.
The recovery module 319 may be configured to determine the synchronization state of the primary storage system 212 using, inter alia, persistent metadata 449 stored on the primary storage system 212 and/or synchronization metadata 459.
The recovery module 319 may, therefore, begin committing the contents of the log 320 immediately following the end of the log segment 326N.
In some embodiments, the recovery module 319 is configured to commit each log entry 322 accessed while traversing from the last committed log segment (segment 326N or log entry 432N) to the end of the log 320 (entry 430). Committing a log entry 322 may comprise reading the log metadata 325 to determine, inter alia, the backing store associated with the log entry 322 (e.g., primary storage system 212), an address and/or identifier of the data, and writing the data segment 324 of the log entry 322 to the identified backing store and/or address. Committing the log entries from 432N to 430 may comprise replaying the sequence of cache write operations recorded in the log 320.
As illustrated above, sequentially committing the write operations in the log 320 may comprise performing multiple, redundant write operations 467; data segments for ID[0] and ID[X] may be written three times each, when only a single write operation for each is required to update the primary storage system 212 with the current version of data ID[0] and ID[X] (e.g., only entries 432A and 434A have to be applied). In some embodiments, the recovery module 319 may be configured to buffer write operations in a write buffer 349 and to defer the write operations until the traversal is complete. During the traversal, the recovery module 319 may access the entries 322 and queue corresponding write operations in the write buffer 349. The recovery module 319 may remove queued write operations that would be made redundant and/or obviated by entries 322 encountered later in the log 320 (e.g., later entries that pertain to the same data identifier and/or address). After the traversal is complete, the recovery module 319 may implement the remaining write operations in the write buffer 349, such that the write operations 467 to the primary storage system 212 do not include multiple, redundant writes.
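A compact sketch of the deferred, coalesced replay: because a dictionary keyed by data identifier naturally keeps only the last write encountered, a single traversal removes the redundant operations before anything is written to the backing store (entry fields and callbacks are illustrative):

```python
def replay_log(entries, backing_write):
    """Replay uncommitted log entries, applying only the last write per identifier."""
    write_buffer = {}                        # data identifier -> latest data segment
    for entry in entries:                    # ordered traversal from the last committed point
        write_buffer[entry["logical_id"]] = entry["data"]   # later entries obviate earlier ones
    for logical_id, data in write_buffer.items():
        backing_write(logical_id, data)      # one write per identifier to the backing store
```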
After performing the cache admission operations 477, the recovery module 319 may commit the contents of the cache 216 to the primary storage system 212 (and/or other backing stores). Committing the contents of the cache 216 may comprise requesting, from the CMS 220, cache data of all of the cache tags 221 (as opposed to only the tags associated with one or more log segments 326A-N). Since data is committed 479 using the CMS 220 (and cache tags 221), the commit operations 479 may not include redundant write operations.
After committing the contents of the cache 216 to the primary storage system 212, the recovery module 319 may be configured to clear the log 320, clear the persistent metadata 449 of the primary storage system 212 (e.g., clear the log indicator), and/or clear the synchronization metadata 459, as disclosed above. The CMS 220 and cache storage module 213 may then resume normal logged cache operations, as disclosed herein.
Each virtual machine 208 may be assigned a respective VMID. The VMID may be assigned when the virtual machine 208 is instantiated on a host 202A-N (e.g., during an initialization and/or handshake protocol). The VMID may comprise a process identifier, thread identifier, or any other suitable identifier. In some embodiments, the VMID may uniquely identify the virtual machine 208 on a particular host 202A-N and/or within a group of hosts 202A-N. The VMID may comprise an identifier assigned by the virtualization kernel 210, hypervisor, host 202A-N, VMDK disk (VLUN disk 235A-N), or the like.
In some embodiments, one or more of the virtual machines 208A-N may be capable of being relocated and/or transferred between the hosts 202A-N. For example, a virtual machine 208X may be migrated from the host 202A to the host 202B. The cache storage module 213 and/or cache virtualization module 233 may be configured to migrate the cache state of the virtual machine 208X between hosts (e.g., from the host 202A to the host 202B). Migrating the cache state of the virtual machine 208X may comprise migrating cache metadata (e.g., cache tags 221X[A]) to the host 202B, migrating data of the virtual machine 208X that has been admitted into the cache 216A on the host 202A (cache data 224X[A]), and the like. Transferring the virtual machine 208X from host 202A to host 202B may comprise retaining the cache state of the virtual machine 208X in response to the virtual machine 208X being transferred from the host 202A and/or transferring portions of the cache state to the destination host 202B. Retaining and/or transferring the cache state of the virtual machine 208X may comprise retaining and/or transferring cache metadata (cache tags 221X[A]) and/or cache data 224X[A] of the virtual machine 208X.
The cache tags 221X[A] may correspond to cache data 224X[A] stored in physical storage locations of the cache 216A (e.g., cache chunks 302 and/or pages 304). The cache data 224X[A] may be associated with identifiers of the cache tags 221X[A] and/or the VMID of the virtual machine 208X by a map module 217, as disclosed above. Transferring the virtual machine 208X to host 202B may comprise transferring a current operating state of the virtual machine 208X, including a current memory image or state of the virtual machine 208X (e.g., stack, heap, virtual memory contents, and so on).
As disclosed above, transferring the cache state of the virtual machine 208X may further comprise transferring the cache data 224X[A] to which the cache tags 221X[B] refer. Transferring the cache data 224X[A] may comprise retaining the cache data 224X[A] on the host 202A in response to the virtual machine 208X being transferred therefrom; requesting portions of the retained cache data 224X[A] from the host 202A; and/or transferring portions of the cache data 224X[A] between the hosts 202A and 202B. In some embodiments, the cache storage module 213A may comprise a retention module 528A, which may be configured to retain cache data 224X[A] of the virtual machine 208X after the virtual machine 208X is transferred from the host 202A. The cache data 224X[A] may be retained for a retention period and/or until the cache storage module 213A determines that the retained cache data 224X[A] is no longer needed. The retention module 528A may determine whether to retain the cache data 224X[A] (and/or determine the cache data retention period) based upon various retention policy considerations, including, but not limited to, availability of cache 216A, availability of cache 216B, relative importance of the retained cache data 224X[A] (as compared to cache requirements of other virtual machines 208), whether the cache data 224X[A] is available in the primary storage system 212 (or other backing store), a cache mode and/or persistence level of the cache data 224X[A], and so on.
The cache storage module 213B may comprise a cache transfer module 530B, which may be configured to access cache data 224X[A] of the virtual machine 208X at the previous host 202A. The cache transfer module 530B may be configured to identify the previous host 202A by use of the VMID (e.g., accessing a previous host identifier maintained by the virtual machine 208X), by interrogating the virtual machine 208X, by querying the virtualization kernel 210B (or other entity), or the like. The cache transfer module 530B may use the host identifier and/or host addressing information to request portions of the retained cache data 224X[A] from the host 202A via the network 105. In some embodiments, the cache transfer module 530B is configured to determine and/or derive a network address and/or network identifier (network name or reference) of the host 202A from the host identifier.
The cache storage module 213A may comprise a cache transfer module 530A that is configured to selectively provide access to retained cache data 224X[A] of the virtual machine 208X. In some embodiments, the cache transfer module 530A is configured to secure the retained cache data 224X[A]. For example, the cache transfer module 530A may be configured to verify that the requesting entity (e.g., the cache storage module 213B) is authorized to access the retained cache data 224X[A], which may comprise verifying that the virtual machine 208X has been deployed on the host 202B and/or verifying that requests for the retained cache data 224X[A] are authorized by the virtual machine 208X (or other authorizing entity). For example, the cache transfer module 530A may request a credential associated with the transferred virtual machine 208X, such as the VMID, or the like. Alternatively, or in addition, the cache transfer module 530A may implement a cryptographic verification, which may comprise verifying a signature generated by the transferred virtual machine 208X, or the like. The cache data 224X[A] may be transferred between the hosts 202A and 202B using various mechanisms, including, but not limited to: push transfers, demand paging transfers, prefetch transfers, bulk transfers, or the like. The cache storage module 531B at host 202B may be configured to selectively admit cache data 224X[A] transferred to the host 202B from host 202A into the cache 224X[B]. The cache storage module 531B may be further configured to populate the cache data 224X[B] from other sources, such as the primary storage system 212, other hosts 202N, or the like. The cache storage module 531B may be configured to associate the cache data 224X[B] with the identifiers of the retained cache tags 221X[B], such that the references in the retained cache tags 221X[B] remain valid per the mappings implemented by the map module 217. Further embodiments of systems and methods for transferring cache state are disclosed in U.S. patent application Ser. No. 13/687,979, entitled “Systems, Methods and Apparatus for Cache Transfers,” filed Nov. 28, 2012, which is hereby incorporated by reference.
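A rough sketch of how the previous host might gate access to retained cache data, with the retained data keyed by (VMID, identifier) and the requester's VMID used as a simple credential (the actual verification mechanism, e.g., cryptographic signatures, is not modeled here):

```python
def serve_retained_cache_data(retained, vm_id, logical_ids):
    """Selectively return retained cache data for an authorized, transferred VM.

    `retained` maps (vm_id, logical_id) -> data.  Raises if the VMID has no
    retained data, standing in for an authorization failure.
    """
    if not any(key[0] == vm_id for key in retained):
        raise PermissionError("requesting virtual machine is not authorized")
    return {lid: retained[(vm_id, lid)]
            for lid in logical_ids
            if (vm_id, lid) in retained}
```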
The cache storage module 213A may comprise a cache log module 313A configured to log cache storage operations performed by the virtual machine 208X (and/or other virtual machines on the host 202A) within a persistent log 320A. Transferring the virtual machine 208X from the host 202A to host 202B may comprise transferring and/or managing the contents of the log 320A.
In some embodiments, the log 320A and the log 320B may be synchronized; the log 320B may comprise a logical or physical standby and/or clone of the log 320A (or vice versa). The cache log module 313A may be configured to log cache storage operations within both logs 320A and 320B (through and/or by use of the cache log module 313B); the cache storage module 213A may acknowledge completion of a write operation (and corresponding I/O request) in response to logging the write operation in both logs 320A and 320B. Alternatively, or in addition, the cache log module 313A may be configured to log cache write operations within a shared log 320X. The shared log 320X may be implemented on a persistent, network-accessible storage device, such as a NAS, SAN, or the like.
In some embodiments, the cache log modules 313A and 313B may be configured to maintain separate logs 320A and 320B. Transferring the virtual machine 208X may comprise committing the contents of the log 320A, and resuming cache logging at the host 202B using the cache log module 313B after the transfer is complete. Alternatively, the cache transfer module 530A may be configured to transfer portions of the log 320A to host 202B as cache state data, as disclosed herein.
Step 610 may comprise receiving a request to write data that has been admitted into a cache. The request of step 610 may be issued by a CMS 220, which may be configured to operate within a bare metal operating environment, a virtual machine 208A-N, and/or a virtualization kernel 210 (e.g., hypervisor). Step 610 may be performed in response to a request to write data that is cached in the cache 216 and/or is associated with one or more cache tags 221 of the CMS 220.
Step 620 may comprise logging a cache write operation corresponding to the write request received at step 610. Step 620 may comprise storing an entry 322 corresponding to the cache write operation in a log 320 maintained on a non-volatile storage device 316. The log 320 may be written sequentially. Accordingly, Step 620 may comprise appending the entry 322 sequentially within a physical address space 416 of the non-volatile storage device 316 (e.g., sequentially at a current append point 328).
The entry 322 may comprise the data that is to be written to the cache 216 (data segment 324) and/or log metadata 325. As disclosed above, the log metadata 325 may include, but is not limited to: an identifier of the data (e.g., logical identifier, logical address, storage I/O address), a backing store identifier, a segment identifier 329A-N, a VMID, a VMDK identifier, and/or the like.
In some embodiments, step 620 comprises appending the entry 322 within a particular log segment 326A-N. Step 620 may further comprise providing an identifier of the particular log segment 326A-N to the CMS 220 to maintain an association between the log 320 and the corresponding cache tag 221 (e.g., using log indicator fields 424 of the cache tags 221, as disclosed herein). In some embodiments, the cache log module 313 may be configured to publish a current log segment identifier to the CMS 220 and/or virtual machines 208A-N. As disclosed above, due to variable I/O rates of different storage clients 104 and/or virtual machines 208A-N, the published log segment may differ from the actual log segment in which the entry 322 is stored. Step 620 may, therefore, comprise providing an updated log indicator value (e.g., sequence indicator) to the CMS 220 if needed.
Step 620 may further comprise performing the write operation to write the data into the cache 216, as disclosed above.
Step 630 may comprise acknowledging completion of the write request of step 610 in response to logging the write operation at step 620. The write request may be acknowledged without writing the data to the primary storage system 212 (and/or other backing store) and/or without writing the data to cache 216.
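Steps 610-630 might be sketched as the following write path, in which completion is acknowledged once the operation is logged (and written to the cache), without waiting on the backing store; all callbacks are hypothetical stand-ins:

```python
def handle_write_request(logical_id, data, cache_write, log_append, acknowledge):
    """Logged cache write path corresponding to steps 610-630."""
    cache_write(logical_id, data)                          # step 620: write into the cache 216
    log_append({"logical_id": logical_id, "data": data})   # step 620: append entry 322 to the log
    acknowledge(logical_id)                                # step 630: acknowledge completion
```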
Step 722 comprises determining whether the identified write request pertains to data in the cache 216. Step 722 may comprise determining whether the CMS 220 includes a cache tag 221 corresponding to the write request. As disclosed above, the cache tags 221 may comprise a translation layer between data identifiers (logical identifiers, storage I/O addresses, etc.) and cache storage locations. Step 722 may comprise determining the storage I/O address of the write request (and/or other identifier(s)) and determining whether the CMS 220 comprises a corresponding cache tag 221. If no cache tag 221 exists, step 722 may further comprise determining whether to admit the data into the cache using, inter alia, the admission module 247. If the data is to be admitted, step 722 may further comprise allocating one or more cache tags 221 for the write request, as disclosed above. If the write request pertains to data in the cache 216 and/or to data that is to be admitted into the cache 216, the flow may continue to step 750.
Step 750 may comprise logging a cache write operation corresponding to the identified write request. As disclosed above, step 750 may comprise appending an entry 322 at a sequential append point 328. The entry 322 may be associated with a particular log segment 326A-N. Step 750 may further comprise maintaining an association between a cache tag 221 associated with the cache write operation and the particular log segment 326A-N, as disclosed above.
Step 754 may comprise acknowledging completion of the write request in response to logging the cache write operation. The write request may be acknowledged before the write operation is written to the primary storage system 212 (or other backing store).
Step 810 may comprise logging the cache write operations sequentially within the physical address space 416 of the non-volatile storage device 316. Accordingly, step 810 may comprise converting a plurality of write operations to randomly distributed physical addresses and/or data identifiers into more efficient sequential storage operations.
Step 810 may further comprise maintaining mappings between cache tags 221 associated with the cache write operations and corresponding portions of the log 320. In some embodiments, cache logging comprises appending entries 322 within respective segments of the log (e.g., log segments 326A-N). Step 810 may comprise associating cache tags 221 of the cache write operations with the respective segments using, inter alia, a log indicator field 424 of the cache tags 221, a cache tag index 421, and/or the like, as disclosed above.
Step 820 may comprise determining whether to commit the log 320 (and/or portions thereof). The determination of step 820 may be based on one or more operating and/or triggering conditions, as disclosed above. In some embodiments, the cache log module 313 is configured to commit a current log segment 326A-N in response to filling the log segment 326A-N, incrementing the current log segment 326A-N, or the like. Alternatively, or in addition, the determination of step 820 may be based on load conditions, log capacity thresholds, preferences, configuration, and/or the like. The flow may continue at step 830 in response to determining to commit the log 320 (and/or portion thereof).
Step 830 may comprise committing the log 320 and/or portion thereof. As disclosed above, committing the log may comprise updating the primary storage 212 (and/or other backing store) with cache data written to the log during one or more intervals and/or periods (e.g., within a particular log segment 326A-N). Step 830 may comprise a) determining a current synchronization state of the primary storage system 212, b) identifying portions of the log 320 to commit based on the current synchronization state, c) accessing cache data corresponding to the identified portions of the log 320, and d) writing the accessed cache data to the primary storage system 212. Determining the current synchronization state of the primary storage system 212 may comprise referencing persistent metadata 449 on the primary storage system 212 itself, synchronization metadata 459 maintained by the cache log module 313, or the like. Identifying portions of the log 320 to commit may comprise comparing an endpoint for the commit operation (e.g., up to a certain log segment 326A-N) to the synchronization state of the primary storage system 212.
Step 830 may further comprise writing the data accessed from the cache 216 to the primary storage system 212. The synchronization module 317 may be configured to identify the primary storage system 212 (and/or other backing store) associated with the cache data using, inter alia, metadata associated with the gathered cache tags 221, as disclosed above.
In some embodiments, step 830 comprises updating synchronization metadata 459 and/or persistent metadata 449 on the primary storage system 212 (and/or other backing stores) to indicate that the identified portions of the log 320 were committed. Step 830 may further comprise erasing, invalidating, and/or reclaiming the committed portions of the log 320, as disclosed above.
Step 932 may comprise recovering the lost cache data by use of the log 320. Step 932 may comprise identifying a set of entries 322 in the log 320 corresponding to data that has not been written to the primary storage 212 (and/or other backing store) and writing the data of the identified set of entries 322 to the primary storage 212. In some embodiments, step 932 may comprise a) determining a synchronization state of the primary storage system 212 (as disclosed above), b) identifying a starting point in the log 320 in accordance with the determined synchronization state (as disclosed above), and c) committing the log 320 from the starting point to the end of the log 320. Committing the log 320 may comprise traversing the log 320 from the starting point. The starting point may correspond to the synchronization state of the primary storage system 212 (and/or other backing store). The starting point may be the entry 322 that immediately follows the last portion of the log 320 that was committed to the primary storage system 212.
In some embodiments, step 932 comprises committing the write operations recorded in the entries 322 as the recovery module 319 traverses the log 320. As disclosed above, performing write operations during the traversal may result in performing more write operations than are actually needed (e.g., performing multiple, redundant write operations). Accordingly, in some embodiments, step 932 comprises buffering and/or queuing write operations while traversing the log 320, removing redundant and/or obviated write operations, and implementing the remaining operations after the traversal is complete.
Alternatively, or in addition, step 932 may comprise admitting data of the write operations into the cache. The data of the write operations may be admitted during traversal, which may result in redundant write operations being performed. In some embodiments, the recovery module 319 is configured to queue and/or buffer the admission operations during the traversal (in a write buffer 349), remove redundant operations, and implement the remaining operations after the traversal is complete. The recovery module 319 may then commit the contents of the cache to the primary storage system 212, as disclosed above.
Step 932 may further comprise clearing the contents of the log 320, persistent metadata 449, and/or synchronization metadata 459, and resuming cache logging operations, as disclosed herein.
Reference throughout this specification to features, advantages, or similar language does not imply that all of the features and advantages that may be realized are included in any single embodiment. Rather, language referring to the features and advantages is understood to mean that a specific feature, advantage, or characteristic described in connection with an embodiment is included in at least one embodiment. Thus, discussion of the features and advantages, and similar language, throughout this specification may, but does not necessarily, refer to the same embodiment.
The embodiments disclosed herein may involve a number of functions to be performed by a computer processor, such as a microprocessor. The microprocessor may be a specialized or dedicated microprocessor that is configured to perform particular tasks according to the disclosed embodiments, by executing machine-readable software code that defines the particular tasks of the embodiment. The microprocessor may also be configured to operate and communicate with other devices such as direct memory access modules, memory storage devices, Internet-related hardware, and other devices that relate to the transmission of data in accordance with various embodiments. The software code may be configured using software formats such as Java, C++, XML (Extensible Mark-up Language) and other languages that may be used to define functions that relate to operations of devices required to carry out the functional operations related to various embodiments.
Within the different types of devices, such as laptop or desktop computers, hand held devices with processors or processing logic, and also computer servers or other devices that utilize the embodiments disclosed herein, there exist different types of memory devices for storing and retrieving information while performing functions according to one or more disclosed embodiments. Cache memory devices are often included in such computers for use by the central processing unit as a convenient storage location for information that is frequently stored and retrieved. Similarly, a persistent memory is also frequently used with such computers for maintaining information that is frequently retrieved by the central processing unit, but that is not often altered within the persistent memory, unlike the cache memory. Main memory is also usually included for storing and retrieving larger amounts of information such as data and software applications configured to perform functions according to various embodiments when executed by the central processing unit. These memory devices may be configured as random access memory (RAM), static random access memory (SRAM), dynamic random access memory (DRAM), flash memory, and other memory storage devices that may be accessed by a central processing unit to store and retrieve information. During data storage and retrieval operations, these memory devices are transformed to have different states, such as different electrical charges, different magnetic polarity, and the like. Thus, the systems and methods disclosed herein enable the physical transformation of these memory devices. Accordingly, the embodiments disclosed herein are directed to novel and useful systems and methods that, in one or more embodiments, are able to transform the memory device into a different state. The disclosure is not limited to any particular type of memory device, or to any commonly used protocol for storing and retrieving information to and from these memory devices.
Embodiments of the systems and methods described herein facilitate the management of data input/output operations. Additionally, some embodiments may be used in conjunction with one or more conventional data management systems and methods, or conventional virtualized systems. For example, one embodiment may be used as an improvement of existing data management systems.
Although the components and modules illustrated herein are shown and described in a particular arrangement, the arrangement of components and modules may be altered to process data in a different manner. In other embodiments, one or more additional components or modules may be added to the described systems, and one or more components or modules may be removed from the described systems. Alternate embodiments may combine two or more of the described components or modules into a single component or module.
20100070701 | Iyigun et al. | Mar 2010 | A1 |
20100070725 | Prahlad et al. | Mar 2010 | A1 |
20100070747 | Iyigun et al. | Mar 2010 | A1 |
20100070982 | Pitts | Mar 2010 | A1 |
20100076936 | Rajan | Mar 2010 | A1 |
20100077194 | Zhao et al. | Mar 2010 | A1 |
20100082774 | Pitts | Apr 2010 | A1 |
20100095059 | Kisley et al. | Apr 2010 | A1 |
20100169542 | Sinclair | Jul 2010 | A1 |
20100174867 | Gill | Jul 2010 | A1 |
20100205231 | Cousins | Aug 2010 | A1 |
20100205335 | Phan et al. | Aug 2010 | A1 |
20100211737 | Flynn et al. | Aug 2010 | A1 |
20100217916 | Gao et al. | Aug 2010 | A1 |
20100228903 | Chandrasek et al. | Sep 2010 | A1 |
20100235597 | Arakawa | Sep 2010 | A1 |
20100246251 | Chen | Sep 2010 | A1 |
20100262738 | Swing et al. | Oct 2010 | A1 |
20100262740 | Borchers et al. | Oct 2010 | A1 |
20100262757 | Sprinkle et al. | Oct 2010 | A1 |
20100262758 | Swing et al. | Oct 2010 | A1 |
20100262759 | Borchers et al. | Oct 2010 | A1 |
20100262760 | Swing et al. | Oct 2010 | A1 |
20100262761 | Borchers et al. | Oct 2010 | A1 |
20100262762 | Borchers et al. | Oct 2010 | A1 |
20100262766 | Sprinkle et al. | Oct 2010 | A1 |
20100262767 | Borchers et al. | Oct 2010 | A1 |
20100262773 | Borchers et al. | Oct 2010 | A1 |
20100262894 | Swing et al. | Oct 2010 | A1 |
20100262979 | Borchers et al. | Oct 2010 | A1 |
20110022819 | Post et al. | Jan 2011 | A1 |
20110107033 | Grigoriev et al. | May 2011 | A1 |
20110153951 | Strumpen | Jun 2011 | A1 |
20110179162 | Mayo et al. | Jul 2011 | A1 |
20110225342 | Sharma et al. | Sep 2011 | A1 |
20110231857 | Zaroo et al. | Sep 2011 | A1 |
20110238546 | Certain et al. | Sep 2011 | A1 |
20110265083 | Davis | Oct 2011 | A1 |
20110289267 | Flynn | Nov 2011 | A1 |
20110314202 | Iyigun et al. | Dec 2011 | A1 |
20110320733 | Sanford et al. | Dec 2011 | A1 |
20120036325 | Mashtizadeh | Feb 2012 | A1 |
20120117562 | Jess | May 2012 | A1 |
20120131265 | Koltsidas | May 2012 | A1 |
20120159081 | Agrawal et al. | Jun 2012 | A1 |
20120173824 | Iyigun et al. | Jul 2012 | A1 |
20120254824 | Bansod | Oct 2012 | A1 |
20120272240 | Starks | Oct 2012 | A1 |
20120278588 | Adams et al. | Nov 2012 | A1 |
20120289258 | Hofstaedter | Nov 2012 | A1 |
20120324183 | Chiruvolu | Dec 2012 | A1 |
20130042156 | Srinivasan | Feb 2013 | A1 |
20130185508 | Talagala | Jul 2013 | A1 |
20130191601 | Peterson | Jul 2013 | A1 |
20130232303 | Quan | Sep 2013 | A1 |
20130263119 | Pissay et al. | Oct 2013 | A1 |
20130268719 | Dover | Oct 2013 | A1 |
20130275391 | Batwara | Oct 2013 | A1 |
20130318283 | Small | Nov 2013 | A1 |
20130326152 | Loaiza | Dec 2013 | A1 |
20130339958 | Droste et al. | Dec 2013 | A1 |
20140136872 | Cooper et al. | May 2014 | A1 |
20140156910 | Uttamchandani et al. | Jun 2014 | A1 |
20140156938 | Galchev et al. | Jun 2014 | A1 |
20150178119 | Lee | Jun 2015 | A1 |
Number | Date | Country |
---|---|---
1771495 | May 2006 | CN |
1100001 | May 2001 | EP |
1418502 | May 2004 | EP |
1814039 | Mar 2009 | EP |
123416 | Sep 2001 | GB |
4242848 | Aug 1992 | JP |
8153014 | Jun 1996 | JP |
200259525 | Sep 2000 | JP |
2009122850 | Jun 2009 | JP |
WO9419746 | Sep 1994 | WO |
WO9518407 | Jul 1995 | WO |
WO9612225 | Apr 1996 | WO |
WO0131512 | May 2001 | WO |
WO0201365 | Jan 2002 | WO |
WO2004061645 | Mar 2004 | WO |
WO2004099989 | Nov 2004 | WO |
WO2005103878 | Nov 2005 | WO |
WO2006062511 | Jun 2006 | WO |
WO2006065626 | Jun 2006 | WO |
WO2008130799 | Mar 2008 | WO |
WO2008073421 | Jun 2008 | WO |
WO2011106394 | Sep 2011 | WO |
Entry |
---|
Actel, “Actel Fusion FPGAs Supporting Intelligent Peripheral Management Interface (IPMI) Applications,” http://www.actel.com/documents/Fusion_IPMI_AN.pdf, Oct. 1, 2006, visited Mar. 11, 2010. |
Adabas, Adabas Caching ASSO, DATA, WORK, http://communities.softwareag.com/web/guest/pwiki/-/wiki/Main/.../pop_up?_36_viewMode=print, Oct. 2008, accessed Aug. 3, 2012. |
Adabas, Adabas Caching Configuration and Tuning, http://documentation.softwareag.com/adabas/ada821mfr/addons/acf/config/cfgover.htm, Sep. 2009, accessed Aug. 3, 2012. |
Adabas, Adabas Caching Facility, http://www.softwareag.com/es/Images/Adabas_Caching_Facility_tcm24-71167.pdf, 2008, accessed Aug. 3, 2012. |
Adabas, File Level Caching, http://documentation.softwareag.com/adabas/ada824mfr/addons/acf/services/file-level-caching.htm, accessed Aug. 3, 2012. |
Agigatech, Bulletproof Memory for RAID Servers, Part 1, http://agigatech.com/blog/bulletproof-memory-for-raid-servers-part-1/, last visited Feb. 16, 2010. |
Anonymous, “Method for Fault Tolerance in Nonvolatile Storage”, http://ip.com, IP.com No. IPCOM000042269D, 2005. |
Ari, “Performance Boosting and Workload Isolation in Storage Area Networks with SanCache,” Hewlett Packard Laboratories, Proceedings of the 23rd IEEE / 14th NASA Goddard Conference on Mass Storage Systems and Technologies (MSST 2006), May 2006, pp. 263-27. |
Arpaci-Dusseau, “Removing the Costs of Indirection in Flash-based SSDs with Nameless Writes,” Jun. 2010, HotStorage'10, Boston, MA. |
Asine, “ASPMC-660 Rugged IDE Flash Drive PMC Module,” http://www.asinegroup.com/products/aspmc660.html, copyright 2002, visited Nov. 8, 2009. |
Atlantis Computing Technology, Caching, http://atlantiscomputing.com/technology/caching, published 2012, accessed Aug. 1, 2012. |
Bandulet, “Object-Based Storage Devices,” Jul. 2007, http://developers.sun.com/solaris/articles/osd.html, visited Dec. 1, 2011. |
Barrall et al., U.S. Appl. No. 60/625,495, “Dynamically Expandable and Contractible Fault-Tolerant Storage System Permitting Variously Sized Storage Devices and Method,” filed Nov. 5, 2004. |
Barrall et al., U.S. Appl. No. 60/718,768, “Dynamically Adaptable Fault-Tolerant Storage System,” filed Sep. 20, 2005. |
BiTMICRO, “BiTMICRO Introduces E-Disk PMC Flash Disk Module at Military & Aerospace Electronics East 2004,” http://www.bitmicro.com/press.sub, published May 18, 2004, visited Mar. 8, 2011. |
Bonnet, “Flash Device Support for Database Management,” published Jan. 9, 2011. |
Brandon, Jr., “Sparse Matrices in CS Education,” Journal of Computing Sciences in Colleges, vol. 24 Issue 5, May 2009, pp. 93-98. |
Casey, “SAN Cache: SSD in the SAN,” Storage Inc., http://www.solidata.com/resourses/pdf/storageing.pdf, 2000, visited May 20, 2011. |
Casey, “Solid State File-Caching for Performance and Scalability,” SolidData Quarter 1 2000, http://www.storagesearch.com/3dram.html, visited May 20, 2011. |
Citrix, XenServer-6.0.0 Installation Guide, Mar. 2, 2012, http://support.citrix.com/servlet/KbServlet/download/28750-102-673824/XenServer-6.0.0-installation.pdf, accessed Aug. 3, 2012. |
Clustered Storage Solutions: “Products,” http://www.clusteredstorage.com/clustered_storage_solutions.HTML, last visited Feb. 16, 2010. |
Coburn, “NV-Heaps: Making Persistent Objects Fast and Safe with Next-Generation, Non-Volatile Memories”, ACM 978-1-4503-0266-1/11/0, published Mar. 5, 2011. |
Data Direct Networks, “White Paper: S2A9550 Overview,” www.datadirectnet.com, 2007. |
EEEL-6892, Lecture 18, “Virtual Computers,” Mar. 2010. |
ELNEC, “NAND Flash Memories and Programming NAND Flash Memories Using ELNEC Device Programmers, Application Note,” published Mar. 1, 2007. |
Ferber, Christian, “XenDesktop and local storage + IntelliCache,” Jun. 22, 2011, blogs.citrix.com/2011/06/22/xendesktop-and-local-storage-intellicache/, accessed Aug. 3, 2012. |
Friedman, Mark, et al., “File Cache Performance and Tuning,” Windows 2000 Performance Guide, O'Reilly & Associates, Inc., http://msdn.microsoft.com/en-us/library/ms369863.aspx, published Jan. 2002, visited Aug. 3, 2012. |
Gal, “A Transactional Flash File System for Microcontrollers,” 2005 USENIX Annual Technical Conference, published Apr. 10, 2009. |
Garfinkel, “One Big File Is Not Enough: A Critical Evaluation of the Dominant Free-Space Sanitization Technique,” 6th Workshop on Privacy Enhancing Technologies. Cambridge, United Kingdom, published Jun. 1, 2006. |
Gill, “WOW: Wise Ordering for Writes—Combining Spatial and Temporal Locality in Non-Volatile Caches,” IBM, Fast 05: 4th USENIX Conference on File and Storage Technologies, 2005. |
Gutmann, “Secure Deletion of Data from Magnetic and Solid-State Memory”, Usenix, 14 pages, San Jose, CA, published Jul. 1, 1996. |
Huffman, “Non-Volatile Memory Host Controller Interface,” Apr. 14, 2008, 65 pgs. |
Hynix Semiconductor, Intel Corporation, Micron Technology, Inc. Phison Electronics Corp., Sony Corporation, Spansion, Stmicroelectronics, “Open NAND Flash Interface Specification,” Revision 2.0, Feb. 27, 2008. |
Hystor, “Making SSDs the Survival of the Fittest in High-Performance Storage Systems,” ics10-Paper 102, Feb. 2010. |
IBM, “Method to Improve Reliability of SSD Arrays,” Nov. 2009. |
Information Technology, “SCSI Object-Based Storage Device Commands,” 2 (OSD-2), Project T10/1729-D, Revision 4, published Jul. 30, 2004, printed Jul. 24, 2008. |
Intel, “Non-Volatile Memory Host Controller Interface (NVMHCI) 1.0,” Apr. 14, 2008. |
Johnson, “An Introduction to Block Device Drivers,” Jan. 1, 1995. |
Kawaguchi, “A Flash-Memory Based File System,” TCON'95 Proceedings of the USENIX 1995 Technical Conference Proceedings, p. 13. |
Linn, Craig, “Windows I/O Performance: Cache Manager and File System Considerations,” CMGA Proceedings, Sep. 6, 2006. |
Lu, Pin, “Virtual Machine Memory Access Tracing with Hypervisor Exclusive Cache,” Department of Computer Science, University of Rochester, 2007. |
Mesnier, “Object-Based Storage,” IEEE Communications Magazine, Aug. 2003, pp. 84-90. |
Micron Technology, Inc., “NAND Flash 101: An Introduction to NAND Flash and How to Design It in to Your Next Product (TN-29-19),” http://www.micron.com/~/media/Documents/Products/Technical%20Note/NAND%20Flash/145tn2919_nand_101.pdf, 2006, visited May 10, 2010. |
Micron, TN-29-08: Technical Note, “Hamming Codes for NAND Flash Memory Devices,” Mar. 10, 2010. |
Micron, “TN-29-17: NAND Flash Design and Use Considerations,” Mar. 10, 2010. |
Micron, “TN-29-42: Wear-Leveling Techniques in NAND Flash Devices,” Mar. 10, 2010. |
Microsoft, Data Set Management Commands Proposal for ATA8-ACS2, published Oct. 5, 2007, Rev. 3. |
Microsoft, “File Cache Management, Windows Embedded CE6.0 R3,” msdn.microsoft.com/en-us/subscriptions/aa911545.aspx, published Aug. 28, 2008. |
Microsoft, “Filter Driver Development Guide,” download.microsoft.com/.../FilterDriverDeveloperGuide.doc 2004. |
Microsoft, “How NTFS Works,” Apr. 9, 2010. |
Morgenstern, David, “Is There a Flash Memory RAID in your Future?”, http://www.eweek.com, eWeek, Ziff Davis Enterprise Holdings Inc., Nov. 8, 2006, visited Mar. 18, 2010. |
Muntz, et al., Multi-level Caching in Distributed File Systems, CITI Technical Report, 91-3, Aug. 16, 1991. |
Nevex Virtual Technologies, “CacheWorks Data Sheet,” http://www.nevex.com/wp-content/uploads/2010/12/Data-Sheet3.pdf, published Dec. 1, 2010. |
Noll, Albert et al., Cell VM: A Homogeneous Virtual Machine Runtime System for a Heterogeneous Single-Chip. |
Novell, “File System Primer”, http://wiki.novell.com/index.php/File—System—Primer, 2006, visited Oct. 18, 2006. |
Tickoo, Omesh, et al., “Modeling Virtual Machine Performance: Challenges and Approaches,” SIGMETRICS Perform. Eval. Rev. 37, 3 (Jan. 2010), 55-60, DOI=10.1145/1710115.1710126, http://doi.acm.org/10.1145/1710115.1710126. |
Perfectcacheserver, “Automatic Disk Caching,” http://www.raxco.com/business/perfectcache_server.aspx, last visited Oct. 31, 2012. |
Pivot3, “Pivot3 announces IP-based storage cluster,” www.pivot3.com, Jun. 22, 2007. |
Plank, “A Tutorial on Reed-Solomon Coding for Fault Tolerance in RAID-like System,” Department of Computer Science, University of Tennessee, pp. 995-1012, Sep. 1997. |
Porter, “Operating System Transactions,” ACM 978-1-60558-752-3/09/10, published Oct. 1, 2009. |
Probert, “Windows Kernel Internals Cache Manager,” Microsoft Corporation, http://www.i.u-tokyo.ac.jp/edu/training/ss/lecture/new-documents/Lectures/15-CacheManager/CacheManager.pdf, printed May 15, 2010. |
Ranaweera, 05-270RO, SAT: Write Same (10) command (41h), T10/05, Jul. 7, 2005, www.t10.org/ftp/t10/document.05/05-270r0.pdf, last visited Apr. 11, 2013. |
Rosen, Richard, “IntelliCache, Scalability and consumer SSDs,” blogs.citrix.com/2012/01/03/intellicache-scalability-and-consumer-ssds, Jan. 3, 2012, accessed Aug. 3, 2012. |
Rosenblum, “The Design and Implementation of a Log-Structured File System,” ACM Transactions on Computer Systems, vol. 10 Issue 1, Feb. 1992. |
Samsung Electronics, “Introduction to Samsung's Linux Flash File System—RFS Application Note”, Version 1.0, Nov. 2006. |
Seagate Technology LLC, “The Advantages of Object-Based Storage-Secure, Scalable, Dynamic Storage Devices, Seagate Research Technology Paper, TP-536” Apr. 2005. |
Sears, “Stasis: Flexible Transactional Storage,” OSDI '06: 7th USENIX Symposium on Operating Systems Design and Implementation, published Nov. 6, 2006. |
Seltzer, “File System Performance and Transaction Support”, University of California at Berkeley, published Jan. 1, 1992. |
Seltzer, “Transaction Support in a Log-Structured File System”, Harvard University Division of Applied Sciences, published Jan. 1, 1993 (Chapter 5, pp. 52-69). |
Seltzer, “Transaction Support in Read Optimized and Write Optimized File Systems,” Proceedings of the 16th VLDB Conference, Brisbane, Australia, published Jan. 1, 1990. |
Shimpi, Anand, The SSD Anthology: Understanding SSDs and New Drives from OCZ, Mar. 18, 2009, 69 pgs. |
Shu, “Data Set Management Commands Proposals for ATA8-ACS2,” Dec. 12, 2007, http://www.t13.org/Documents/UploadedDocuments/docs2008/e07154r6-Data_Set_Management_Proposal_for_ATA-ACS2.pdf, printed Apr. 5, 2010. |
Singer, Dan, “Implementing MLC NAND Flash for Cost-Effective, High Capacity Memory,” M-Systems, White Paper, 91-SR014-02-8L, Rev. 1.1, Sep. 2003. |
Solid Data, Maximizing Performance through Solid State File-Caching, Best Practices Guide, http://soliddata.com/resources/pdf/bp-sybase.pdf, May 2000. |
Spansion, “Data Management Software (DMS) for AMD Simultaneous Read/Write Flash Memory Devices”, published Jul. 7, 2003. |
Spillane, “Enabling Transactional File Access via Lightweight Kernel Extensions”, Stony Brook University, IBM T. J. Watson Research Center, published Feb. 25, 2009. |
State Intellectual Property Office, Office Action, CN Application No. 200780050970.0, dated Jun. 29, 2011. |
State Intellectual Property Office, Office Action, CN Application No. 200780050970.0, dated Oct. 28, 2010. |
State Intellectual Property Office, Office Action, CN Application No. 200780051020.X, dated Nov. 11, 2010. |
State Intellectual Property Office, Office Action, CN Application No. 200780050983.8, dated May 18, 2011. |
State Intellectual Property Office, Office Action, CN Application No. 200780051020.X, dated Jul. 6, 2011. |
State Intellectual Property Office, Office Action, CN Application No. 200780051020.X, dated Nov. 7, 2011. |
State Intellectual Property Office, Office Action, CN Application No. 200780050970.0, dated Jan. 5, 2012. |
Steere, David et al., Efficient User-Level File Cache Management on the Sun Vnode Interface, School of Computer Science, Carnegie Mellon University, Apr. 18, 1990. |
Superspeed, “New Super Cache 5 on Servers,” http://www.superspeed.com/servers/supercache.php, last visited Oct. 31, 2013. |
Tal, “NAND vs. NOR Flash Technology,” M-Systems, www2.electronicproducts.com/PrintArticle.aspx?ArticleURL=FEBMSY1.feb2002.html, visited Nov. 22, 2010. |
Terry et al., U.S. Appl. No. 60/797,127, “Filesystem-aware Block Storage System, Apparatus, and Method,” filed May 3, 2006. |
U.S., Interview Summary for U.S. Appl. No. 10/372,734, dated Feb. 28, 2006. |
U.S., Notice of Allowance for U.S. Appl. No. 12/986,117, dated Apr. 4, 2013. |
U.S., Notice of Allowance for U.S. Appl. No. 12/986,117 dated Jun. 5, 2013. |
U.S., Office Action for U.S. Appl. No. 12/879,004 dated Feb. 25, 2013. |
U.S., Office Action for U.S. Appl. No. 13/607,486 dated Jan. 10, 2013. |
U.S., Office Action for U.S. Appl. No. 10/372,734, dated Sep. 1, 2005. |
U.S., Office Action for U.S. Appl. No. 11/952,113, dated Dec. 15, 2010. |
U.S., Office Action for U.S. Appl. No. 12/711,113, dated Jun. 6, 2012. |
U.S., Office Action for U.S. Appl. No. 12/711,113, dated Nov. 23, 2012. |
U.S., Office Action for U.S. Appl. No. 13/607,486 dated May 2, 2013. |
U.S., Office Action for U.S. Appl. No. 13/118,237 dated Apr. 22, 2013. |
U.S., Office Action, U.S. Appl. No. 11/952,109, dated May 1, 2013. |
U.S., Office Action, U.S. Appl. No. 11/952,109, dated Nov. 29, 2011. |
Van Hensbergen, IBM Research Report, “Dynamic Policy Disk Caching for Storage Networking,” IBM Research Division, Computer Science, RC24123 (WO611-189), Nov. 28, 2006. |
VMware, Introduction to VMware vSphere, http://www.vmware.com/pdf/vsphere4/r40/vsp_40_intro_vs.pdf, 2009, accessed Aug. 1, 2012. |
VMware, Virtual Disk API Programming Guide, Virtual Disk Development Kit 1.2, Nov. 2010, accessed Aug. 3, 2012. |
Volos, “Mnemosyne: Lightweight Persistent Memory”, ACM 978-1-4503-0266-1/11/03, published Mar. 5, 2011. |
Wacha, “Improving RAID-Based Storage Systems with Flash Memory,” First Annual ISSDM/SRL Research Symposium, Oct. 20-21, 2009. |
Walp, “System Integrated Flash Storage,” Microsoft Corporation, 2008, http://download.microsoft.com/download/5/E/6/5E66B27B-988B-4F50-AF3A-C2FF1E62180F/COR-T559_WHO8.pptx, printed Apr. 6, 2010, 8 pgs. |
Wang, “OBFS: A File System for Object-based Storage Devices,” Apr. 2004. |
Wikipedia, “Object Storage Device,” http://en.wikipedia.org/wiki/Object-storage-device, last visited Apr. 29, 2010. |
Winnett, Brad, “S2A9550 Overview,” White Paper, http://www.ddn.com/pdfs/ddn_s2a_9550_white_paper.pdf, Jul. 2006, 27 pgs. |
WIPO, International Preliminary Report on Patentability for PCT/US2007/086691, dated Feb. 16, 2009. |
WIPO, International Preliminary Report on Patentability for PCT/US2007/086688, dated Mar. 16, 2009. |
WIPO, International Preliminary Report on Patentability for PCT/US2007/086701, dated Mar. 16, 2009. |
WIPO, International Preliminary Report on Patentability for PCT/US2007/086687, dated Mar. 18, 2009. |
WIPO, International Preliminary Report on Patentability for PCT/US2007/025048, dated Jun. 10, 2009. |
WIPO, International Preliminary Report on Patentability for PCT/US2010/048325, dated Mar. 13, 2012. |
WIPO, International Search Report and Written Opinion for PCT/US2007/086691, dated May 8, 2008. |
WIPO, International Search Report and Written Opinion for PCT/US2007/025049, dated May 14, 2008. |
WIPO, International Search Report and Written Opinion for PCT/US2007/025048, dated May 27, 2008. |
WIPO, International Search Report and Written Opinion for PCT/US2007/086701, dated Jun. 5, 2008. |
WIPO, International Search Report and Written Opinion for PCT/US2007/086687, dated Sep. 5, 2008. |
WIPO, International Search Report and Written Opinion for PCT/US2011/65927, dated Aug. 28, 2012. |
WIPO, International Search Report and Written Opinion for PCT/US2012/029722, dated Oct. 30, 2012. |
WIPO, International Search Report and Written Opinion for PCT/US2012/039189, dated Dec. 27, 2012. |
WIPO, International Search Report and Written Opinion for PCT/US2010/025885, dated Sep. 28, 2011. |
WIPO, International Search Report and Written Opinion for PCT/US2012/050194, dated Feb. 26, 2013. |
Woodhouse, David, “JFFS: The Journaling Flash File System,” Red Hat, Inc., http://sourceware.org/jffs2/jffs2.pdf, visited Jun. 22, 2010. |
Wright, “Extending ACID Semantics to the File System”, ACM Transactions on Storage, vol. 3, No. 2, published May 1, 2011, pp. 1-40. |
Wu, “eNVy: A Non-Volatile, Main Memory Storage System,” ACM 0-89791-660-3/94/0010, ASPLOS-VI Proceedings of the sixth international conference on Architectural support for programming languages and operating systems, pp. 86-97, 1994. |
Yang, “A DCD Filter Driver for Windows NT 4,” Proceedings of the 12th International Conference on Computer Applications in Industry and Engineering (CAINE-99), Atlanta, Georgia, USA, Nov. 4-6, 1999. |
Yerrick, “Block Device,” http://www.pineight.com/ds/block, last visited Mar. 1, 2010. |
U.S., Office Action for U.S. Appl. No. 14/262,581 dated Jun. 19, 2014. |
U.S., Office Action Interview Summary for U.S. Appl. No. 13/541,659 dated Aug. 26, 2014. |
U.S., Office Action for U.S. Appl. No. 13/687,979 dated Sep. 9, 2014. |
U.S., Office Action for U.S. Appl. No. 13/192,365 dated Jul. 17, 2014. |
U.S., Office Action for U.S. Appl. No. 13/287,998 dated Jun. 10, 2014. |
U.S., Office Action for U.S. Appl. No. 13/288,005 dated Jul. 8, 2014. |
Number | Date | Country
---|---|---
20140281131 A1 | Sep 2014 | US