Storage systems in general, and block-based storage systems specifically, are a key element in modern data centers and computing infrastructure. These systems are designed to store and retrieve large amounts of data: a block of data is stored by providing a data block address together with the data block content, and the data block content stored at a specified address is retrieved by providing that address.
Storage solutions are typically partitioned into categories based on a use case and application within a computing infrastructure, and a key distinction exists between primary storage solutions and archiving storage solutions. Primary storage is typically used as the main storage pool for computing applications during application run-time. As such, the performance of primary storage systems is very often a key challenge and a major potential bottleneck in overall application performance, since storage and retrieval of data consumes time and delays the completion of application processing. Storage systems designed for archiving applications are much less sensitive to performance constraints, as they are not part of the run-time application processing.
In general, computer systems grow over their lifetime, and the data under management tends to grow with them. Growth can be exponential, and in both primary and archiving storage systems the exponential capacity growth typical of modern computing environments presents a major challenge, as it results in increased cost, space, and power consumption of the storage systems required to support ever-increasing amounts of information.
Existing storage solutions, and especially primary storage solutions, rely on address-based mapping of data, as well as address-based functionality of the storage system's internal algorithms. This is only natural, since computing applications always rely on address-based mapping and identification of the data they store and retrieve. However, a completely different scheme, in which data is mapped and managed internally within the storage system based on its content instead of its address, has many substantial advantages. For example, it improves storage capacity efficiency, since any duplicate block data will only occupy the actual capacity of a single instance of that block. As another example, it improves performance, since duplicate block writes do not need to be executed internally in the storage system. Existing storage systems, whether primary storage systems or archiving storage systems, are incapable of supporting the combination of content-based storage—with its numerous advantages—and ultra-high performance. This is a result of the fact that the implementation of a content-based storage scheme faces several challenges:
(a) intensive computational load which is not easily distributable or breakable into smaller tasks,
(b) an inherent need to break large blocks into smaller block sizes in order to achieve content addressing at fine granularity. This block fragmentation dramatically degrades the performance of existing storage solutions,
(c) inability to maintain sequential location of data blocks within the storage systems, since mapping is not address based any more, and such inability causes dramatic performance degradation with traditional spinning disk systems,
(d) the algorithmic and architectural difficulty in distributing the tasks associated with content based mapping over a large number of processing and storage elements while maintaining single content-addressing space over the full capacity range of the storage system.
A number of issues arise with respect to such devices, and it is necessary to consider such issues as performance, lifetime and resilience to failure of individual devices, overall speed of response and the like.
Such devices may be used in highly demanding circumstances where failure to process data correctly can be extremely serious, or where large scales are involved, and where the system has to be able to cope with sudden surges in demand.
In one aspect, a method includes splitting empty RAID stripes into sub-stripes and storing pages into the sub-stripes based on a compressibility score. In another aspect, a method includes reading pages from 1-stripes, storing compressed data in a temporary location, reading multiple stripes, determining a compressibility score for each stripe and filling stripes based on the compressibility score. In a further aspect, a method includes scanning a dirty queue in a system cache, compressing pages ready for destaging, combining compressed pages into one aggregated page, writing the one aggregated page to one stripe and storing pages with the same compressibility score in a stripe.
In one aspect, an apparatus includes electronic hardware circuitry configured to split empty RAID stripes into sub-stripes and store pages into the sub-stripes based on a compressibility score. In another aspect, an apparatus includes electronic hardware circuitry configured to read pages from 1-stripes, store compressed data in a temporary location, read multiple stripes, determine a compressibility score for each stripe and fill stripes based on the compressibility score. In a further aspect, an apparatus includes electronic hardware circuitry configured to scan a dirty queue in a system cache, compress pages ready for destaging, combine compressed pages into one aggregated page, write the one aggregated page to one stripe and store pages with the same compressibility score in a stripe.
In several aspects, an article includes a non-transitory computer-readable medium that stores computer-executable instructions. In one aspect, the instructions cause a machine to split empty RAID stripes into sub-stripes and store pages into the sub-stripes based on a compressibility score. In another aspect, the instructions cause a machine to read pages from 1-stripes, store compressed data in a temporary location, read multiple stripes, determine a compressibility score for each stripe and fill stripes based on the compressibility score. In a further aspect, the instructions cause a machine to scan a dirty queue in a system cache, compress pages ready for destaging, combine compressed pages into one aggregated page, write the one aggregated page to one stripe and store pages with the same compressibility score in a stripe.
Described herein are data reduction techniques that may be used in a flash-based key/value cluster storage array. The techniques described herein enable the array to compress/decompress much faster than previously known techniques, while preserving the array logical block structure and all other services including, but not limited to, deduplication, snapshots, replication and so forth. In one example, pages can be compressed and decompressed in parallel, giving a significant performance boost.
In a Content Addressable Storage (CAS) array, data is stored in blocks, for example of 4 KB, where each block has a unique large hash signature, for example of 20 bytes, saved on Flash memory.
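As a minimal sketch of this content-addressing step (assuming SHA-1, one of the standard hash functions mentioned later in this description, and treating its 20-byte digest as the block's signature), the following illustrates how two identical 4 KB blocks resolve to the same signature:

```python
import hashlib

BLOCK_SIZE = 4 * 1024   # 4 KB block, as in the example above
SIGNATURE_SIZE = 20     # 20-byte hash signature

def content_signature(block: bytes) -> bytes:
    """Return the 20-byte hash signature used as the block's content address."""
    assert len(block) == BLOCK_SIZE
    return hashlib.sha1(block).digest()  # SHA-1 produces exactly 20 bytes

# Identical blocks yield identical signatures, which is what makes
# deduplication possible: a second copy needs no additional capacity.
sig_a = content_signature(b"\x00" * BLOCK_SIZE)
sig_b = content_signature(b"\x00" * BLOCK_SIZE)
assert sig_a == sig_b and len(sig_a) == SIGNATURE_SIZE
```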
The examples described herein include a networked memory system. The networked memory system includes multiple memory storage units arranged for content addressable storage of data. The data is transferred to and from the storage units using separate data and control planes. Hashing is used for the content addressing, and the hashing produces evenly distributed results over the allowed input range. The hashing defines the physical addresses so that data storage makes even use of the system resources.
A relatively small granularity may be used, for example with a page size of 4 KB, although smaller or larger block sizes may be selected at the discretion of the skilled person. This enables the device to detach the incoming user access pattern from the internal access pattern. That is to say the incoming user access pattern may be larger than the 4 KB or other system-determined page size and may thus be converted to a plurality of write operations within the system, each one separately hashed and separately stored.
Content addressable data storage can be used to ensure that data appearing twice is stored at the same location. Hence unnecessary duplicate write operations can be identified and avoided. Such a feature may be included in the present system as data deduplication. As well as making the system more efficient overall, it also increases the lifetime of those storage units that are limited by the number of write/erase operations.
The separation of Control and Data may enable a substantially unlimited level of scalability, since control operations can be split over any number of processing elements, and data operations can be split over any number of data storage elements. This allows scalability in both capacity and performance, and may thus permit an operation to be effectively balanced between the different modules and nodes.
The separation may also help to speed the operation of the system. That is to say it may speed up Writes and Reads. Such may be due to:
(a) Parallel operation of certain Control and Data actions over multiple Nodes/Modules
(b) Use of optimal internal communication/networking technologies per the type of operation (Control or Data), designed to minimize the latency (delay) and maximize the throughput of each type of operation.
Also, separation of control and data paths may allow each Control or Data information unit to travel within the system between Nodes or Modules in the optimal way, meaning only to where it is needed and if/when it is needed. The set of optimal where and when coordinates is not the same for control and data units, and hence the separation of paths ensures the optimization of such data and control movements, in a way which is not otherwise possible. The separation is important in keeping the workloads and internal communications at the minimum necessary, and may translate into increased optimization of performance.
De-duplication of data, meaning ensuring that the same data is not stored twice in different places, is an inherent effect of using Content-Based mapping of data to D-Modules and within D-Modules.
Scalability is inherent to the architecture. Nothing in the architecture limits the number of the different R, C, D, and H modules which are described further herein. Hence any number of such modules can be assembled. The more modules added, the higher the performance of the system becomes and the larger the capacity it can handle. Hence scalability of performance and capacity is achieved.
The principles and operation of an apparatus and method according to the present invention may be better understood with reference to the drawings and accompanying description.
Reference is now made to
The control modules 14 may control execution of read and write commands. The data modules 16 are connected to the storage devices and, under control of a respective control module, pass data to or from the storage devices. Both the C and D modules may retain extracts of the data stored in the storage device, and the extracts may be used for the content addressing. Typically the extracts may be computed by cryptographic hashing of the data, as will be discussed in greater detail below, and hash modules (
Routing modules 18 may terminate storage and retrieval operations and distribute command parts of any operations to control modules that are explicitly selected for the operation in such a way as to retain balanced usage within the system 10.
The routing modules may use hash values, calculated from data associated with the operations, to select the control module for the distribution. More particularly, selection of the control module may use hash values, but typically relies on the user address and not on the content (hash). The hash value is, however, typically used for selecting the Data (D) module, and for setting the physical location for data storage within a D module.
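A minimal sketch of the two selection rules follows; the module counts and the modulo-based mappings are illustrative assumptions, not the system's actual distribution functions:

```python
import hashlib

NUM_C_MODULES = 4   # illustrative counts; the architecture allows any number
NUM_D_MODULES = 8

def select_c_module(lun: int, offset: int) -> int:
    """Control module selection is based on the user address, not the content."""
    return (lun * 31 + offset) % NUM_C_MODULES

def select_d_module(block: bytes) -> int:
    """Data module selection is based on the content hash, so identical blocks
    land on the same D module regardless of the user address they were written to."""
    digest = hashlib.sha1(block).digest()
    return int.from_bytes(digest[:8], "big") % NUM_D_MODULES
```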
The storage devices may be solid state random access storage devices, as opposed to spinning disk devices; however disk devices may be used instead or in addition.
A deduplication feature may be provided. The routing modules and/or data modules may compare the extracts or hash values of write data with hash values of already stored data, and where a match is found, simply point to the matched data and avoid rewriting.
The modules are combined into nodes 20 on the network, and the nodes are connected over the network by a switch 22.
The use of content addressing with multiple data modules selected on the basis of the content hashing, and a finely-grained mapping of user addresses to Control Modules allow for a scalable distributed architecture.
A glossary is now given of terms used in the following description:
X-PAGE—A predetermined-size aligned chunk as the base unit for memory and disk operations. Throughout the present description the X-Page size is referred to as 4 KB; however, other smaller or larger values can be used as well and nothing in the design is limited to a specific value.
LUN or LOGICAL UNIT NUMBER—A common name in the industry for designating a volume of data, or a group of data blocks, named with the LUN. Each data block is referred to, by the external user of the storage system, according to its LUN and its address within this LUN.
LOGICAL X-PAGE ADDRESS—Logical address of an X-Page. The address contains a LUN identifier as well as the offset of the X-Page within the LUN.
LOGICAL BLOCK—A 512-byte (sector) aligned chunk, which is the SCSI base unit for disk operations.
LOGICAL BLOCK ADDRESS—Logical address of a Logical Block. The logical block address contains a LUN identifier as well as the offset of the logical block within the LUN.
SUB-LUN—Division of a LUN to smaller logical areas, to balance the load between C modules. Each such small logical area is called a sub-LUN.
SUB-LUN UNIT SIZE—The fixed size of a sub-LUN.
X-PAGE DATA—Specific sequence of user data values that resides in an X-Page. Each such X-Page Data is uniquely represented in the system by its hash digest.
D PRIMARY—The D module responsible for storing an X-Page's Data
D BACKUP—The D module responsible for storing a backup for an X-Page Data. The backup is stored in a non-volatile way (NVRAM or UPS protected).
Acronyms:
LXA—Logical X-Page Address.
LB—Logical Block.
LBA—Logical Block Address.
AUS—Atomic Unit Size.
SL—Sub-LUN.
SLUS—Sub-LUN Unit Size.
MBE—Management Back End.
The examples described herein relate to a block-level storage system, offering basic and advanced storage functionality. The design may be based on a distributed architecture, where computational, Storage Area Networking (SAN), and storage elements are distributed over multiple physical Nodes, with all such Nodes being inter-connected over an internal network through a switch device. The distributed architecture enables the scaling of the system's capabilities in multiple aspects, including overall storage capacity, performance characteristics in bandwidth and I/O operations per second (IOPS), computational resources, internal and external networking bandwidth, and others. While being based on a distributed architecture, the system presents, externally, a unified storage system entity with scalable capabilities.
The system's architecture and the internal algorithms implementing the basic and advanced storage functions are optimized for improved utilization of the capabilities of random-access memory/storage media, in contrast with mechanical-magnetic spinning disk storage media. The optimizations are implemented in the design itself, and may, for example, include the ability to break incoming writes into smaller blocks and distribute the operation over different Nodes. Such an adaptation is particularly suitable for random-access memory/storage media but is less suitable in a spinning-disk environment, as it would degrade performance to extremely low levels. The adaptation also includes content/hash-based mapping of data, which distributes the data over different D Nodes in general and, within D Nodes, over different SSD devices. Again, such a scheme is more suitable for random-access memory/storage media than for spinning-disk media, because such a spread of data blocks would result in very poor performance in the spinning disk case. That is to say, the described elements of the present architecture are designed to work well with random-access media, and achieve benefits in performance, scalability, and functionality such as inline deduplication. Such random-access memory media can be based on any one or a combination of flash memory, DRAM, phase change memory, or other memory technology, whether persistent or non-persistent, and is typically characterized by random seek/access times and random read/write speeds substantially higher than those exhibited by spinning disk media. The system's internal data block mapping, the algorithms implementing advanced storage functions, and the algorithms for protecting data stored in the system are designed to provide storage performance and advanced storage functionality at substantially higher performance, speed, and flexibility than those available with alternative storage systems.
Data mapping within the system is designed not only to improve performance, but also to improve the life span and reliability of the electronic memory media, in cases where the memory technology used has limitations on write/erase cycles, as is the case with flash memory. Lifetime maximization may be achieved by avoiding unnecessary write operations as will be explained in greater detail below. For the purpose of further performance optimization, life span maximization, and cost optimization, the system may employ more than a single type of memory technology, including a mix of more than one Flash technology (e.g., single-level cell (SLC) flash and multi-level cell (MLC) flash), and a mix of Flash and DRAM technologies. The data mapping optimizes performance and life span by taking advantage of the different access speeds and different write/erase cycle limitations of the various memory technologies.
The core method for mapping blocks of data internally within the system is based on Content Addressing, and is implemented through a distributed Content Addressable Storage (CAS) algorithm.
This scheme maps blocks of data internally according to their content, resulting in mapping of identical block to the same unique internal location. The distributed CAS algorithm allows for scaling of the CAS domain as overall system capacity grows, effectively utilizing and balancing the available computational and storage elements in order to improve overall system performance at any scale and with any number of computational and storage elements.
The system supports advanced In-line block level deduplication, which may improve performance and save capacity.
Elements of the system's functionality are: Write (store) data block at a specified user address; Trim data block at a specified user address; Read data block from a specified user address; and In-line block level deduplication.
The following features may be provided: (1) a distributed CAS-based storage optimized for electronic random-access storage media, where the optimization includes utilizing storage algorithms, mainly the content-based uniformly-distributed mapping of data, that inherently spread data in a random way across all storage devices; such randomization of storage locations within the system, while maintaining a very high level of performance, is preferably achievable with storage media with a high random access speed; (2) a distributed storage architecture with separate control and data planes; (3) data mapping that maximizes write-endurance of storage media; (4) system scalability; (5) system resiliency to fault and/or failure of any of its components; (6) use of multi-technology media to maximize write-endurance of storage media; and (7) in-line deduplication in ultra-high performance storage using electronic random-access storage media.
The examples described herein implement block storage in a distributed and scalable architecture, efficiently aggregating performance from a large number of ultra-fast storage media elements (SSDs or other), preferably with no performance bottlenecks, while providing in-line, highly granular block-level deduplication with no or little performance degradation.
One challenge is to avoid performance bottlenecks and allow performance scalability that is independent of user data access patterns.
The examples described herein may overcome the scalability challenge by providing data flow (Write, Read) that is distributed among an arbitrary and scalable number of physical and logical nodes. The distribution is implemented by (a) separating the control and data paths (the “C” and “D” modules), (b) maintaining optimal load balancing between all Data modules, based on the content of the blocks (through the CAS/hashing mechanisms), hence ensuring always balanced load sharing regardless of user access patterns, (c) maintaining optimal load balancing between all Control modules, based on the user address of the blocks at fine granularity, hence ensuring always balanced load sharing regardless of user access patterns, and (d) performing all internal data path operations using small granularity block size, hence detaching the incoming user access pattern from the internal access pattern, since the user pattern is generally larger than the block size.
A second challenge is to support inline, highly granular block level deduplication without degrading storage (read/write speed) performance. The result should be scalable in both capacity—which is deduplicated over the full capacity space—and performance.
The solution involves distributing computation-intensive tasks, such as calculating cryptographic hash values, among an arbitrary number of nodes. In addition, CAS metadata and its access may be distributed among an arbitrary number of nodes. Furthermore, data flow algorithms may partition read/write operations in an optimally-balanced way, over an arbitrary and scalable number of Nodes, while guaranteeing consistency and inline deduplication effect over the complete storage space.
In detaching the data from the incoming pattern, the R-Module breaks up any incoming block which is larger than the granularity size across sub-LUNs, sending the relevant parts to the appropriate C-Modules. Each C-Module is predefined to handle a range or set of Sub-LUN logical addresses. The C-Module breaks up the block it receives for distribution to D-Modules, at a pre-determined granularity, which is the granularity for which a Hash is now calculated. Hence the end result is that a request to write a certain block (for example of size 64 KB) ends up being broken up into, for example, 16 internal writes, each write comprising a 4 KB block.
The specific numbers for granularity can be set based on various design tradeoffs, and the specific number used herein of 4 KB is merely an example. The broken down blocks are then distributed to the D modules in accordance with the corresponding hash values.
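The following sketch illustrates this breakdown for a 64 KB write, assuming a 4 KB granularity and a simple modulo mapping of hash digests to D modules (both illustrative):

```python
import hashlib

X_PAGE_SIZE = 4 * 1024

def split_and_route(data: bytes, num_d_modules: int):
    """Break a large aligned write into X-Page sized internal writes and choose
    a D module for each one from its content hash (illustrative mapping)."""
    internal_writes = []
    for offset in range(0, len(data), X_PAGE_SIZE):
        page = data[offset:offset + X_PAGE_SIZE]
        digest = hashlib.sha1(page).digest()
        d_module = int.from_bytes(digest[:8], "big") % num_d_modules
        internal_writes.append((offset, digest, d_module))
    return internal_writes

# A 64 KB block becomes 16 separately hashed and separately routed 4 KB writes.
assert len(split_and_route(b"\xab" * (64 * 1024), num_d_modules=8)) == 16
```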
A further challenge is to address flash-based SSD write/erase cycle limitations, in which the devices have a lifetime dependent on the number of write/erase cycles.
The solution may involve, first, in-line deduplication to avoid writing duplicate data blocks in all cases. Second, content (hash) based mapping to different data modules and SSDs results in optimal wear-leveling, ensuring an equal spread of write operations to all data modules and SSDs independently of the user data/address access patterns.
In the following a system is considered from a functional point of view. As described above with respect to
Reference is now made to
A function of the R Module 202 is to terminate SAN Read/Write commands and route them to appropriate C and D Modules for execution by these Modules. By doing so, the R Module can distribute workload over multiple C and D Modules, and at the same time create complete separation of the Control and Data planes, that is to say provide separate control and data paths.
A function of the C Module 204 is to control the execution of a Read/Write command, as well as other storage functions implemented by the system. It may maintain and manage key metadata elements.
A function of the D Module 206 is to perform the actual Read/Write operation by accessing the storage devices 208 (designated SSDs) attached to it. The D module 206 may maintain metadata related with the physical location of data blocks.
A function of the H Module is to calculate the Hash function value for a given block of data.
Reference is now made to
In
All Nodes include a switch interface 308, to allow interconnecting with a switch in a multi-Node system configuration. A Node that contains a SAN function includes at least one SAN Interface module 310 and at least one R Module. A Node that contains a Store function includes at least one SSD Driver Module 312 and at least one D Module. Hence, Compute+SAN and Compute+SAN+STORE Nodes contain a SAN Interface, to interface with the external SAN. The interface may typically use a SCSI-based protocol running on any of a number of interfaces including Fiber Channel, Ethernet, and others, through which Read/Write and other storage function commands are being sent to the system. Compute+Store and Compute+SAN+Store Nodes contain an SSD driver 312 to interface with SSDs 208 attached to that specific Node, where data is stored and accessed.
Reference is now made to
The interconnections between each Node and the Switch may include redundancy, so as to achieve high system availability with no single point of failure. In such a case, each Node may contain two or more Switch Interface modules 406, and the Switch may contain two or more ports per physical Node.
As an example
A four node system configuration is shown in
A system that is built from multiple physical Nodes can inherently support a high availability construction, where there is no single point of failure. This means that any Node or sub-Node failure can be compensated for by redundant Nodes, having a complete copy of the system's meta-data, and a complete redundant copy of stored data (or parity information allowing recovery of stored data). The distributed and flexible architecture allows for seamless support of failure conditions by simply directing actions to alternate Nodes.
The R module is responsible for: routing SCSI I/O requests to the C modules, guaranteeing execution and returning the result; and balancing the work load between the C modules for the requests it is routing.
An A→C table indicates which C module is responsible for each logical X-page address (LXA). Each C module is responsible for a list of Sub LUNs (SLs).
The R module receives requests for I/Os from the SAN INTERFACE, routes them to the designated C modules and returns the result to the SAN INTERFACE.
If an I/O operation spans across multiple SLs, and perhaps multiple C modules, then the R module has the responsibility of breaking the big I/O operation into multiple smaller independent operations according to the sub LUN unit size (SLUS). Since the atomic unit size (AUS) is never larger than the SLUS, as explained in greater detail below, each such I/O is treated as an independent operation throughout the system. The results may then be aggregated before returning to the SAN INTERFACE.
The R module is responsible for maintaining an up-to-date A→C table coordinated with the MBE. The A→C table is expected to balance the range of all possible LXAs between the available C modules.
For write operations, the R module instructs the calculation of the hash digest for each X-Page by requesting such calculation from a Hash calculation module.
The C module is responsible for: receiving an I/O request from an R module on a certain SL, guaranteeing its atomic execution and returning the result; communicating with D modules to execute the I/O requests; monitoring the disk content of its SLs' logical space by associating each LXA with its hash digest; and balancing the work load between the D modules for the SLs it is maintaining.
An H→D table maps each range of hash digests to the corresponding D module responsible for this range.
An A→H table maps each LXA that belongs to the SLs C is responsible for, to the hash digest representing the X-Page Data that currently resides in this address.
The C module receives I/O requests from R modules, distributes the work to the D modules, aggregates the results and guarantees an atomic operation. The result is returned to the R module.
The C module maintains an up-to-date H→D table coordinated with the MBE. The table is expected to balance the range of all possible hash digests between the available D modules.
The C module maintains an A→H table in a persistent way. The C module may initiate I/O requests to D modules in order to save table pages to disk, and read them from disk. To avoid frequent disk operations, a Journal of the latest table operations may be maintained.
Data is balanced between the C modules based on the logical address, at the granularity of sub-LUNs.
The D module is responsible for: maintaining a set of LUNs which are attached locally and performing all I/O operations on these LUNs; managing the physical layout of the attached LUNs; managing the mapping between X-Page Data hash digests and their physical location in a persistent way; managing deduplication of X-Page Data in a persistent way; and receiving disk I/O requests from C modules, performing them and returning a result.
The D module is also responsible for, for each write operation, backing up the X-Page Data in the designated D backup module and performing read-modify operations for writes that are smaller than X-Page size (This process also involves computing a hash digest for these X-Pages).
The D module is further responsible for maintaining an up-to-date H→(D, Dbackup) table coordinated with the MBE. The H→(D, Dbackup) table is expected to balance the range of all possible hash digests between the available D modules.
The D module does not communicate directly with R modules. The only interaction with R modules involves RDMA read/write operations of X-Page Data.
Balancing between the D modules is based on hashing of the content.
The D module makes use of a hash digest metadata table. The hash digest metadata table maps each in-use hash digest, that represents actual X-Page Data, to its metadata information, including its physical page on the storage media (SSD), its memory copy (if one exists), a mapping to any backup memory copy and a reference count for the purpose of deduplication.
A further structure used is the H→(D, Dbackup) table. The H→(D, Dbackup) table maps each range of hash digests to the corresponding D module responsible for the range as well as the D backup module responsible for the range.
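A rough, illustrative shape of one entry of the hash digest metadata table is sketched below; the field names are assumptions introduced only for clarity:

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class HashDigestEntry:
    """Per-digest metadata kept by a D module (illustrative field names)."""
    physical_page: int            # physical page of the X-Page Data on the SSD
    memory_copy: Optional[int]    # in-memory copy, if one exists
    backup_copy: Optional[int]    # mapping to the backup memory copy
    ref_count: int                # deduplication reference count

hash_digest_metadata: Dict[bytes, HashDigestEntry] = {}   # keyed by hash digest
```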
The D modules allocate a physical page for each X-Page. The D modules also manage the memory for the physical storage. They allocate memory pages for read/write operations and perform background destaging from memory to storage media when necessary, for example, when running low on memory.
The D modules manage a separate nonvolatile memory pool (NVRAM or UPS protected) for X-Page Data backup purposes. The backup holds X-Pages that are held in memory of the D primary and have not yet been destaged. When re-balancing between D modules occurs (due to a D module failure for example), the D module may communicate with other D modules in order to create new backup copies or move a primary ownership as required.
The D modules allow deduplication per X-Page Data by maintaining a persistent reference count that guarantees only one copy per X-Page Data. The D modules manage the hash digest metadata table in a persistent way. The table is coordinated with the physical layout for physical pages allocation, with the memory pointer, memory backup pointer and deduplication reference count.
The D modules receive I/O requests from C modules, perform the requests while supporting deduplication and return the result. The D modules may perform RDMA read/write operations on memory that resides in other modules, such as R modules as mentioned above, as part of the I/O operation.
When a write operation smaller than the size of an X-Page is received, the D module may read the entire X-Page to memory and perform partial X-Page modification on that memory. In this case race conditions may occur, for example when two small writes to the same X-Page occur in parallel, and the D module may be required to compute the hash digest of the resulting X-Page. This is discussed in greater detail below.
The H-Module calculates the Hash function of a given block of data, effectively mapping an input value to a unique output value. The Hash function may be based on standards based hash functions such as SHA-1 and MD5, or based on a proprietary function. The hash function is selected to generate a uniformly distributed output over the range of potential input values.
The H modules usually share nodes with an R module but more generally, the H modules can reside in certain nodes, in all nodes, together with R modules, or together with C or D modules.
The following discussion provides high level I/O flows for read, write and trim.
Throughout these flows, unless noted otherwise, control commands are passed between modules using standard RPC messaging, while data “pull” operations may use RDMA read. Data push (as well as Journal) operations may use RDMA write.
The read flow of one X-Page may consist of one R module which receives the read request from the application, one C module in charge of the address requested and one D module which holds the X-Page to be read. Larger, or unaligned, requests may span several X-Pages and thus may involve several D modules. These requests may also span several SLs, in which case they may involve several C modules as well.
Reference is now made to
The C module, when receiving the request, consults the A→H component, from which it obtains a hash digest representing the X-Page to be read; consults the H→D component to determine which D module holds the X-Page in question; and sends this D module a read request which includes parameters that include a request ID (as received from the R module), the hash digest, a pointer to the buffer to read to, as received from the R module; and an identifier of the R module.
The D module, when receiving the request, reads the data of the requested X-Page from SSD and performs an RDMA write to the requesting R module, specifically to the pointer passed to it by the C module.
Finally the D module returns success or error to the requesting C module.
The C module in turn propagates success or error back to the requesting R module, which may then propagate it further to answer the application.
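A toy, single-process rendition of the aligned one-X-Page read flow above is sketched here; the dictionaries stand in for the distributed A→H, H→D, and SSD state, and the buffer copy stands in for the RDMA write back to the R module:

```python
a_to_h = {("lun0", 7): b"\x11" * 20}           # C module: (LUN, LXA) -> hash digest
h_to_d = {b"\x11" * 20: "D1"}                  # C module: hash digest -> owning D module
ssd = {("D1", b"\x11" * 20): b"\xee" * 4096}   # D module: hash digest -> X-Page Data

def read_x_page(lun, lxa, read_buffer: bytearray) -> str:
    digest = a_to_h[(lun, lxa)]     # consult the A->H component
    d_module = h_to_d[digest]       # consult the H->D component
    data = ssd[(d_module, digest)]  # D module reads the X-Page from SSD
    read_buffer[:] = data           # stands in for the RDMA write to the R module's buffer
    return "success"

buf = bytearray(4096)
assert read_x_page("lun0", 7, buf) == "success" and bytes(buf) == b"\xee" * 4096
```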
Reference is now made to
The rest of the R module's treatment is identical to the aligned one X-Page scenario previously described herein.
The C module, when receiving the request divides the logical address space to LXAs. For each LXA the C module consults the A→H component to determine the corresponding hash digest; consults the H→D table to determine which D module is responsible for the current LXA; sends each D module a read command containing all the hashes that the respective D module is responsible for. The parameters of the read command include a request ID (as received from the R module); a list of respective hash-pointer pairs; and the identifier of the R module.
Each D module, when receiving the request, acts per hash-pointer pair in the same manner as described above for one X-Page. Aggregated success or error is then sent to the requesting C module.
The C module aggregates all the results given to it by the D modules and returns success or error back to the requesting R module, which may then answer the application.
In the case that a read request spans multiple SLs, the R module splits the request and sends several C modules read requests. Each C module may receive one request per SL. The flow may continue as in the simpler case above, except that now the R module aggregates the responses before it answers the application.
Read requests smaller than 4 KB, as well as requests not aligned to 4 KB, may be dealt with at the R module level. For each such parcel of data, the R module may request to read the encompassing X-Page. Upon successful completion of the read command, the R module may crop the non-relevant sections and return only the requested data to the application.
The write flow of one X-Page may consist of one R module which receives the write request from the application, one C module in charge of the address requested and three D modules: Dtarget which is in charge of the X-Page Data to be written (according to its appropriate hash digest), Dold which was in charge of the X-Page Data this address contained previously (“old” hash digest), and Dbackup in charge of storing a backup copy of the X-Page Data to be written.
Reference is now made to
When an R module receives a write request from the application, the R module allocates a request ID for this operation; translates the LBA to an LXA; computes a hash digest on the data to be written; consults its A→C component to determine which C module is in charge of the current LXA; and sends the designated C module a write command with parameters that include a request ID; an LXA; a hash digest; and a pointer to the buffer containing the data to be written.
The C module, when receiving the request consults its H→D component to understand which D module is in charge of the X-Page to be written (Dtarget); and sends Dtarget a write request with parameters that include the request ID (as received from the R module); the hash digest (as received from the R module); the pointer to the data to write (as received from the R module); and the identifier of the R module.
The D module receiving the write command, Dtarget, may first check if it already holds an X-Page corresponding to this hash. There are two options here:
First, Dtarget does not have the X-Page. In this case Dtarget fetches the data from the R module using RDMA read and stores it in its memory; consults the H→D component to determine which D module is in charge of storing a backup copy of this X-Page (Dbackup); performs an RDMA write of the X-Page Data to the Dbackup backup memory space; and returns success (or failure) to the C module.
Second, Dtarget has the X-Page. In this case Dtarget increases the reference count, returns success (or failure) to the C module.
The C module waits for a response from Dtarget. If a success is returned, the C module updates the A→H table to indicate that the LXA in question should point to the new hash and returns a response to the requesting R module.
If this is not a new entry in the A→H table, the C module asynchronously sends a decrease reference count command to Dold (the D module responsible for the hash digest of the previous X-Page Data). These commands may be aggregated at the C module and sent to the D modules in batches.
The R module may answer the application once it receives a response from the C module.
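The deduplicating part of this write flow can be sketched as follows; the dictionaries again stand in for distributed module state, and the Dbackup copy and failure paths are omitted:

```python
import hashlib

ssd = {}              # Dtarget: hash digest -> X-Page Data
ref_count = {}        # Dtarget: hash digest -> deduplication reference count
a_to_h = {}           # C module: (LUN, LXA) -> hash digest
decref_batch = []     # "decrease reference" commands aggregated at the C module

def write_x_page(lun, lxa, data: bytes) -> str:
    digest = hashlib.sha1(data).digest()
    if digest in ssd:               # Dtarget already holds this X-Page Data
        ref_count[digest] += 1      # only the reference count is increased
    else:                           # new content: store it
        ssd[digest] = data
        ref_count[digest] = 1
    old = a_to_h.get((lun, lxa))
    a_to_h[(lun, lxa)] = digest     # A->H now points the LXA at the new hash
    if old is not None and old != digest:
        decref_batch.append(old)    # Dold's reference count is decreased later, in batch
    return "success"
```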
Reference is now made to
In the case that the write request spans a range of addresses which include more than one X-Page but only one SL, the R module sends the designated C module a write command with parameters that include a request ID; a first LXA; a size of the requested write in LXAs, n; and HBIG, which is a unique identifier of the entire chunk of data to be written. HBIG may be a computed hash digest and thus equal for two identical chunks of data.
Additional parameters sent with the write command are n pointers that point to the buffers which hold the data to be written.
The rest of the R module treatment is the same as for the aligned one X-Page scenario.
The C module, when receiving the request, consults its H→D component to understand which D module is in charge of HBIG (Dtarget) and generates a hash digest per pointer by replacing one byte of HBIG with the offset of that pointer. It is noted that this byte must not collide with the bytes used by the H→D table distribution.
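A small sketch of this derivation follows; which byte is overwritten (here, the last one) is an assumption, and the only stated constraint is that it must not be one of the bytes used by the H→D distribution:

```python
def per_pointer_digest(h_big: bytes, pointer_offset: int) -> bytes:
    """Derive a per-X-Page hash by replacing one byte of HBIG with the pointer's offset."""
    assert 0 <= pointer_offset < 256
    return h_big[:-1] + bytes([pointer_offset])

h_big = b"\x42" * 20                                         # unique identifier of the chunk
digests = [per_pointer_digest(h_big, i) for i in range(16)]  # e.g., a 16-X-Page chunk
assert len(set(digests)) == 16                               # one distinct digest per pointer
```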
It may send Dtarget a write request with the parameters that include the request ID (as received from the R module); a list of respective hash-pointer pairs; and the Identifier of the R module.
The D module, when receiving the request, acts per hash-pointer pair in the same manner as described above for one X-Page. Aggregated success or error is then sent to the requesting C module.
The C module waits for a response from Dtarget. If the response indicates success, the C module updates its A→H table to indicate that the LXAs in question should point to the new hashes. Updating of entries in the A→H table may be done as an atomic operation, to ensure the write request is atomic. Note that all requests aligned to 4 KB (or another predefined block size) that fall within a SL may be atomic. The C module returns a response to the requesting R module. The C module adds the list of old hashes to the “decrease reference” batch if needed.
The R module answers the application once it receives a response from the C module.
In the case in which a write request spans multiple SLs, the R module splits the request and sends smaller write requests to several C modules. Each C module receives one request per SL (with a unique request ID). The flow continues as in the simpler case above, except that now the R module aggregates the responses before it answers the application.
Referring to
Referring to
Referring to
Process 800 finds S*N pages with a score S (822) and stores the pages into a corresponding S-stripe (826). When data is read from a compressed sub-page, it is uncompressed on the fly. Consider for example a 64 KB read command. This command triggers 16 page read operations (each of 4 KB). If the data is compressed, the array will decompress 16 pages. However, unlike other systems, the decompress operations can be run in parallel. For example, in a large cluster it is likely that the 16 pages are processed by different CPUs or different threads in multi-core CPUs, and the 16 decompress operations will happen simultaneously. This effectively improves the performance of decompress by a factor of 16, and instead of a high penalty on the 64 KB read there is very little if any penalty.
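A minimal sketch of the parallel decompression behind such a 64 KB read is shown below; zlib and a thread pool stand in for the array's actual compressor and its distribution of work across CPUs or threads:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

# 16 compressed 4 KB pages, as produced by a 64 KB write.
compressed_pages = [zlib.compress(bytes([i]) * 4096) for i in range(16)]

# The 16 decompress operations run concurrently rather than one after another.
with ThreadPoolExecutor(max_workers=16) as pool:
    pages = list(pool.map(zlib.decompress, compressed_pages))

assert all(len(page) == 4096 for page in pages)
```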
Processes 700 and 800 change the behavior of the backend 614. Every page received is compressed, but existing stripes are not split.
Referring to
Process 900 destages data into 1-stripes (902). For example, user write operations result in the normal destaging of data into 1-stripes, where destaging is the writing of the data to the disks (e.g., writing data from 614 to 616).
Process 900 reads pages from the 1-stripes (906) and attempts to compress the pages (910). Process 900 adds metadata to each page (916). For example, the metadata may include one or more of the following: an indicator of whether the page is compressed or not; an indicator of what method was used to compress the page; one or more indicators indicating compression method attempted.
Process 900 determines if the page was compressed (920). For example, the process 900 reads the metadata added in processing block 916.
Process 900 stores compressed data in a temporary location (922), reads j 1-stripes (e.g., 100), determines the compressibility score of each of the pages in the stripes (928) and fills the stripes (i.e., writes the compressed pages to the stripes corresponding to their compression level) (938). j is greater than or equal to 1. In one particular example, j=100, so that there are 100*N pages with different scores. In one example, process 900 first tries to fill as many 8-stripes as possible (i.e., it looks for 8N pages with a score greater than or equal to 8). Then it tries to fill as many 4-stripes as possible (looking for 4N pages of score greater than or equal to 4). Then it tries to fill as many 2-stripes as possible (looking for 2N pages of score greater than or equal to 2). The remainder of the pages is stored in 1-stripes (i.e., no compression).
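The greedy fill just described can be sketched as follows; pages are given as (page_id, score) pairs, and n denotes the number of 4 KB page slots per stripe, so an S-stripe holds S*n compressed pages:

```python
def fill_stripes(pages, n):
    """Greedy fill: 8-stripes first, then 4-stripes, then 2-stripes; the
    remainder goes to 1-stripes (i.e., is stored uncompressed)."""
    remaining = list(pages)                  # list of (page_id, score)
    placement = {8: [], 4: [], 2: [], 1: []}
    for s in (8, 4, 2):
        eligible = [p for p in remaining if p[1] >= s]
        while len(eligible) >= s * n:        # enough pages to fill one more S-stripe
            batch, eligible = eligible[:s * n], eligible[s * n:]
            placement[s].append(batch)
            for page in batch:
                remaining.remove(page)
    placement[1] = remaining
    return placement
```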
While it is unlikely that all the pages in a 1-stripe will have the same score, the old 1-stripes cannot be dismantled. However, if the stripes could be moved to an S-Stripe they could be dismantled. For example, processing block 702 may be used to break the 1-stripe into sub-stripes. Similarly, multiple adjacent empty sub-stripes may be combined into an empty 1-stripe.
Process 900 marks the old image (948). For example, the old image remains in place in the 1-Stripe with an indication (indicator) that it participates in the stripe but is unused. Eventually, these unused pages will be overwritten (in a similar way that pages with ref-count=0 eventually get overwritten). Process 900 is ideal when the system resources are limited and, in particular, if a write cache is limited. Process 900 enables the system 600 to service I/Os as fast as possible during high activity, and compresses the data when activity is low.
Any of the processes described herein may be further modified. The data will be kept in the system's data cache uncompressed. Once the data reaches the persistent uncompressed data cache, the write is acknowledged to the user (thus avoiding the latency penalty introduced by the compression of data for write). The uncompressed cache acts as a queue for a background compression process, which attempts to compress all the pages and place the compressed data into the correct compressed queue of pages. These queues will act as inputs for regular destaging processes, while the original uncompressed data will be kept persistently in the cache. Once the destage process is over, the uncompressed data serves as regular cached data and the compressed data is removed from the queue. There are separate queues for a set of data-size types (for instance <1K, <2K, 4K—compression scores of 4, 2, 1).
In another modification, additional types of queues may be introduced: for example, a queue for a different compression level (such as <3K—compression score 1.5). In another modification, other types of destaging schemes are introduced, for example, destaging one 1K and one 3K page to a single 4K page on disk. To add these modifications, another queue is added.
A significant benefit of this approach is that since the cache remains uncompressed, a read operation that results in a cache hit does not have a decompression latency penalty either, which is crucial for performance of applications that verify the data on write (like databases) or data hotspots.
Another benefit of this approach is that compression does not become the system's bottleneck. This is done by monitoring the state of the compression queue. Once the queue overfills, a background process is introduced that destages pages immediately from the queue uncompressed, and writes them to normal 4K stripes. These pages are marked in the physical layout as uncompressed-compressible. Once the high load is over, an additional process evicts these pages to the persistent compression queue, where they are compressed in the same manner new data would have been compressed (or written as uncompressed-uncompressible if they cannot be compressed). These two separate states, uncompressed-compressible and uncompressed-uncompressible, allow the system to differentiate between pages that are uncompressed because time was not available to compress them, and pages that are uncompressed because an attempt was made to compress them and they were found not to be significantly compressible. This enables the background compression algorithm to skip the latter kind of pages (and not repeatedly try to compress them only to fail every time).
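A rough sketch of this bypass and its two uncompressed page states is given below; the queue, layout map, and threshold are simplified stand-ins for the actual cache structures:

```python
from collections import deque
from enum import Enum

class PageState(Enum):
    UNCOMPRESSED_COMPRESSIBLE = 1     # not compressed yet, only because time ran out
    UNCOMPRESSED_UNCOMPRESSIBLE = 2   # compression was attempted and is not worthwhile

def drain_overfull_queue(compression_queue: deque, layout: dict, limit: int):
    """Under high load, destage pages from the queue uncompressed and mark them
    so the background compressor can revisit them once the load drops."""
    while len(compression_queue) > limit:
        page_id, _data = compression_queue.popleft()
        layout[page_id] = PageState.UNCOMPRESSED_COMPRESSIBLE

def revisit(layout: dict, is_compressible) -> list:
    """Background pass: re-queue only compressible pages, mark the rest so they
    are never retried."""
    todo = [pid for pid, st in layout.items() if st is PageState.UNCOMPRESSED_COMPRESSIBLE]
    for pid in todo:
        if not is_compressible(pid):
            layout[pid] = PageState.UNCOMPRESSED_UNCOMPRESSIBLE
    return [pid for pid in todo if layout[pid] is PageState.UNCOMPRESSED_COMPRESSIBLE]
```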
Referring to
Another enhancement over the previous methods is a method for dynamically assigning stripes of different widths (1, 2, 4, 8 and so forth). A pre-defined division would not work, since the number of pages having different compression scores is not known a priori. Therefore, one may be tempted to use a greedy algorithm, i.e., having all the stripes defined as unassigned, finding the emptiest stripes of the needed type and writing to the pages currently free, or, if none are found, picking an unassigned stripe and assigning it to the needed type. For example, suppose all data written to the array has a score of 4, i.e., all the stripes assigned as 1K stripes are full. Once the array is full, half of the data is deleted in such a pattern that only the odd physical addresses become free. In this situation, on the one hand the array is 50% free, yet on the other hand there is no ability to write even a single uncompressible page, since there is not a single contiguous free page of length 4K. To solve this, a background defragmentation process may be introduced. This process continuously takes the emptiest stripes of each compression level and attempts to consolidate as many of these stripes as possible into one full stripe. The freed-up stripes are returned to the pool of unassigned stripes, where they can be assigned to any other type. Counters are kept on the amount of available stripes of each size, including a counter for the number of unassigned stripes. This process runs only when any of these counters is low.
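A simplified sketch of one consolidation pass for a single compression level follows; in practice only the emptiest stripes are taken and counters decide when the pass runs, but the repacking idea is the same:

```python
def consolidate(stripes, capacity):
    """Repack the pages of partially filled stripes into as few stripes as
    possible; stripes freed this way return to the unassigned pool, where
    they may later be re-assigned to any other stripe width."""
    pages = [page for stripe in stripes for page in stripe]
    repacked = [pages[i:i + capacity] for i in range(0, len(pages), capacity)]
    freed = len(stripes) - len(repacked)
    return repacked, freed

repacked, freed = consolidate([[1, 2], [3], [4, 5, 6]], capacity=4)
assert repacked == [[1, 2, 3, 4], [5, 6]] and freed == 1
```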
Compression allows a data-verification method. A read flow involving compressed data is as follows: (1) the host sends a read request; (2) the front-end translates it to a list of pages that need to be read; (3) the back-end RAID creates a list of compressed pages that need to be read and uncompressed; (4) the back-end decompresses these pages into the system's cache; (5) the back-end transmits the data to the front-end. Assume that some of the pages were written to the media (Flash, Disk) with an undetectable media error, and are now corrupt. It is highly unlikely that a corrupt compressed page can be decompressed with no error. The data-verification method is used to verify data integrity. If a page is found that cannot be decompressed, the RAID system declares this page as being corrupt, and uses the normal RAID mechanisms to rebuild it (from a mirror or from a parity). Once the page is rebuilt, the corrupt page is fixed and the correct data is returned to the user. If the rebuild fails (for example, if other pages required for the rebuild are corrupt as well), an error is returned to the user. Still, this is better than sending bogus data (to avoid a silent data corruption).
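A small sketch of the verification idea; zlib stands in for the array's compressor, and the rebuild callback stands in for the normal mirror/parity reconstruction path:

```python
import zlib

def verified_read(compressed_page: bytes, rebuild_from_raid):
    """A corrupt compressed page is very unlikely to decompress cleanly, so a
    decompression failure is treated as a detected error and triggers a rebuild."""
    try:
        return zlib.decompress(compressed_page)
    except zlib.error:
        repaired = rebuild_from_raid()       # mirror or parity based reconstruction
        if repaired is None:
            raise IOError("rebuild failed; an error is returned rather than bogus data")
        return zlib.decompress(repaired)
```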
Referring to
Process 1100 scans a volatile "dirty queue" (1102) and compresses pages ready for destaging (1116), according to the regular system policy, for example. Process 1100 combines compressed pages into one aggregated page (1118) and writes the one aggregated page to a stripe (1122). For example, multiple compressed pages are combined into one aggregated 4 KB page and the 4 KB page is written to a regular 4 KB stripe. The multiple compressed pages are called "sub-pages". Sub-pages can be of different sizes. An aggregated page contains a fixed-length header describing the offsets and sizes of all compressed pages included in the aggregated page. The page address includes a field for the physical RAID 4 KB page as well as an index to the sub-page. Process 1100 is completely flexible and is not limited by any assumption about cell borders. In other words, unlike other methods which pack together similar sub-pages (i.e., only pages compressible to <2 KB size, or only pages compressible to <1 KB size), process 1100 allows packing together pages of different compressibility. For example, it is possible to pack together one page that fits in 3 KB with two pages that each fit in 200 B, into a total of a single 4 KB page with 3 sub-pages.
In process 1100 a fairly large queue of dirty pages is maintained. Normally, this queue has at least a few thousand pages. For each of these pages the compressibility score is determined, similarly to other methods described herein. A standard packing algorithm selects pages that may be combined into an aggregated page in the most efficient way. Moreover, if process 1100 does not find any matching pages for some compressed page, this page may simply be left in the dirty queue until the next time a destaging cycle is run. It should be noted that process 1100 provides more efficient utilization of the RAID pages than the other methods described herein. An additional benefit is that process 1100 avoids changes to the front-end or the back-end code.
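An illustrative layout for such an aggregated page is sketched below; the exact header encoding (a sub-page count followed by fixed-size offset/size pairs) is an assumption made only for the sketch:

```python
import struct

PAGE_SIZE = 4096
MAX_SUBPAGES = 3   # illustrative bound, fixing the header length

def pack_aggregated_page(subpages):
    """Pack already-compressed sub-pages of differing sizes into one 4 KB page
    behind a fixed-length header of (offset, size) pairs."""
    assert len(subpages) <= MAX_SUBPAGES
    header_len = 2 + 4 * MAX_SUBPAGES                 # count + one pair per possible sub-page
    header = struct.pack("<H", len(subpages))
    offset, body = header_len, b""
    for sub in subpages:
        header += struct.pack("<HH", offset, len(sub))
        body += sub
        offset += len(sub)
    header = header.ljust(header_len, b"\x00")
    assert len(header) + len(body) <= PAGE_SIZE
    return (header + body).ljust(PAGE_SIZE, b"\x00")

# One ~3 KB sub-page and two ~200 B sub-pages share a single aggregated 4 KB page.
page = pack_aggregated_page([b"x" * 3000, b"y" * 200, b"z" * 200])
assert len(page) == PAGE_SIZE
```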
In a modification to process 1100, data is kept compressed in cache, thus extending the effective size of the cache. For example, a cache that may contain up to 1100 uncompressed pages can contain a few thousand compressed pages. This improves Read and Write performance: Read hit is improved since more pages can be stored in cache, and write performance is improved since the write cache can maintain more pages before it needs to slow down writes to the speed of the disk. Also, it may be beneficial to transmit compressed data over the internal network and decompress it on the way to the host.
Referring to
Referring to
The processes described herein (e.g., processes 700, 800, 900 and 1000) are not limited to use with the hardware and software of
The system may be implemented, at least in part, via a computer program product (e.g., in a non-transitory machine-readable storage medium such as, for example, a non-transitory computer-readable medium), for execution by, or to control the operation of, data processing apparatus (e.g., a programmable processor, a computer, or multiple computers). Each such program may be implemented in a high level procedural or object-oriented programming language to communicate with a computer system. However, the programs may be implemented in assembly or machine language. The language may be a compiled or an interpreted language and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network. A computer program may be stored on a non-transitory machine-readable medium that is readable by a general or special purpose programmable computer for configuring and operating the computer when the non-transitory machine-readable medium is read by the computer to perform the processes described herein. For example, the processes described herein may also be implemented as a non-transitory machine-readable storage medium, configured with a computer program, where upon execution, instructions in the computer program cause the computer to operate in accordance with the processes. A non-transitory machine-readable medium may include but is not limited to a hard drive, compact disc, flash memory, non-volatile memory, volatile memory, magnetic diskette and so forth but does not include a transitory signal per se.
The processes described herein are not limited to the specific examples described. For example, the processes 700, 800, 900, 1000, 1100 and 1200 are not limited to the specific processing order of
The processing blocks (for example, in the processes 700, 800, 900, 1000, 1100 and 1200) associated with implementing the system may be performed by one or more programmable processors executing one or more computer programs to perform the functions of the system. All or part of the system may be implemented as special purpose logic circuitry (e.g., an FPGA (field-programmable gate array) and/or an ASIC (application-specific integrated circuit)). All or part of the system may be implemented using electronic hardware circuitry that includes electronic devices such as, for example, at least one of a processor, a memory, a programmable logic device or a logic gate.
Elements of different embodiments described herein may be combined to form other embodiments not specifically set forth above. Other embodiments not specifically described herein are also within the scope of the following claims.
This patent application is a divisional application of U.S. patent application Ser. No. 14/230,405, filed on Mar. 31, 2014 and entitled “DATA REDUCTION TECHNIQUES IN A FLASH-BASED KEY/VALUE CLUSTER STORAGE,” which is incorporated herein by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
4164763 | Briccetti et al. | Aug 1979 | A |
4608839 | Tibbals, Jr. | Sep 1986 | A |
4821178 | Levin et al. | Apr 1989 | A |
5319645 | Bassi et al. | Jun 1994 | A |
5537534 | Voigt et al. | Jul 1996 | A |
5539907 | Srivastava et al. | Jul 1996 | A |
5627995 | Miller et al. | May 1997 | A |
5694619 | Konno | Dec 1997 | A |
5710724 | Burrows | Jan 1998 | A |
5732273 | Srivastava et al. | Mar 1998 | A |
5802553 | Robinson et al. | Sep 1998 | A |
5805932 | Kawashima et al. | Sep 1998 | A |
5860137 | Raz et al. | Jan 1999 | A |
5896538 | Blandy et al. | Apr 1999 | A |
5903730 | Asai et al. | May 1999 | A |
5940618 | Blandy et al. | Aug 1999 | A |
5940841 | Schmuck et al. | Aug 1999 | A |
5987250 | Subrahmanyam | Nov 1999 | A |
5999842 | Harrison et al. | Dec 1999 | A |
6182086 | Lomet et al. | Jan 2001 | B1 |
6208273 | Dye et al. | Mar 2001 | B1 |
6226787 | Serra et al. | May 2001 | B1 |
6327699 | Larus et al. | Dec 2001 | B1 |
6353805 | Zahir et al. | Mar 2002 | B1 |
6470478 | Bargh et al. | Oct 2002 | B1 |
6496908 | Kamvysselis et al. | Dec 2002 | B1 |
6519766 | Barritz et al. | Feb 2003 | B1 |
6553464 | Kamvysselis et al. | Apr 2003 | B1 |
6624761 | Fallon | Sep 2003 | B2 |
6640280 | Kamvysselis et al. | Oct 2003 | B1 |
6643654 | Patel et al. | Nov 2003 | B1 |
6654948 | Konuru et al. | Nov 2003 | B1 |
6658471 | Berry et al. | Dec 2003 | B1 |
6658654 | Berry et al. | Dec 2003 | B1 |
6691209 | O'Connell | Feb 2004 | B1 |
6801914 | Barga et al. | Oct 2004 | B2 |
6820218 | Barga et al. | Nov 2004 | B1 |
6862632 | Halstead et al. | Mar 2005 | B1 |
6870929 | Greene | Mar 2005 | B1 |
6883018 | Meiri et al. | Apr 2005 | B1 |
6886164 | Meiri | Apr 2005 | B2 |
6898685 | Meiri et al. | May 2005 | B2 |
6910075 | Marshak et al. | Jun 2005 | B2 |
6938122 | Meiri et al. | Aug 2005 | B2 |
6944726 | Yoder et al. | Sep 2005 | B2 |
6968369 | Veprinsky et al. | Nov 2005 | B2 |
6976139 | Halstead et al. | Dec 2005 | B2 |
7000086 | Meiri et al. | Feb 2006 | B2 |
7024525 | Yoder et al. | Apr 2006 | B2 |
7032228 | McGillis et al. | Apr 2006 | B1 |
7051176 | Meiri et al. | May 2006 | B2 |
7054883 | Meiri et al. | May 2006 | B2 |
7099797 | Richard | Aug 2006 | B1 |
7113945 | Moreshet et al. | Sep 2006 | B1 |
7114033 | Longinov et al. | Sep 2006 | B2 |
7143410 | Coffman et al. | Nov 2006 | B1 |
7174423 | Meiri et al. | Feb 2007 | B2 |
7190284 | Dye et al. | Mar 2007 | B1 |
7197616 | Meiri et al. | Mar 2007 | B2 |
7228456 | Lecrone et al. | Jun 2007 | B2 |
7240116 | Marshak et al. | Jul 2007 | B2 |
7251663 | Smith | Jul 2007 | B1 |
7292969 | Aharoni et al. | Nov 2007 | B1 |
7315795 | Homma | Jan 2008 | B2 |
7376651 | Moreshet et al. | May 2008 | B2 |
7380082 | Meiri et al. | May 2008 | B2 |
7383385 | Meiri et al. | Jun 2008 | B2 |
7383408 | Meiri et al. | Jun 2008 | B2 |
7386668 | Longinov et al. | Jun 2008 | B2 |
7389497 | Edmark et al. | Jun 2008 | B1 |
7392360 | Aharoni et al. | Jun 2008 | B1 |
7409470 | Halstead et al. | Aug 2008 | B2 |
7421681 | DeWitt, Jr. et al. | Sep 2008 | B2 |
7430589 | Veprinsky et al. | Sep 2008 | B2 |
7552125 | Evans | Jun 2009 | B1 |
7574587 | DeWitt, Jr. et al. | Aug 2009 | B2 |
7577957 | Kamvysselis et al. | Aug 2009 | B1 |
7613890 | Meiri | Nov 2009 | B1 |
7617372 | Bjornsson et al. | Nov 2009 | B1 |
7672005 | Hobbs et al. | Mar 2010 | B1 |
7693999 | Park | Apr 2010 | B2 |
7702871 | Arnon et al. | Apr 2010 | B1 |
7714747 | Fallon | May 2010 | B2 |
7814218 | Knee et al. | Oct 2010 | B1 |
7827136 | Wang et al. | Nov 2010 | B1 |
7870195 | Meiri | Jan 2011 | B1 |
7898442 | Sovik | Mar 2011 | B1 |
7908436 | Srinivasan et al. | Mar 2011 | B1 |
7962664 | Gotch et al. | Jun 2011 | B2 |
8046545 | Meiri et al. | Oct 2011 | B2 |
8078813 | LeCrone et al. | Dec 2011 | B2 |
8200923 | Healey et al. | Jun 2012 | B1 |
8332687 | Natanzon et al. | Dec 2012 | B1 |
8335771 | Natanzon et al. | Dec 2012 | B1 |
8335899 | Meiri et al. | Dec 2012 | B1 |
8468180 | Meiri et al. | Jun 2013 | B1 |
8478951 | Healey et al. | Jul 2013 | B1 |
8560926 | Yeh | Oct 2013 | B2 |
8578204 | Ortenberg et al. | Nov 2013 | B1 |
8600943 | Fitzgerald et al. | Dec 2013 | B1 |
8677087 | Meiri et al. | Mar 2014 | B2 |
8694700 | Natanzon et al. | Apr 2014 | B1 |
8706959 | Arnon et al. | Apr 2014 | B1 |
8719497 | Don et al. | May 2014 | B1 |
8732124 | Arnon et al. | May 2014 | B1 |
8782357 | Halstead et al. | Jul 2014 | B2 |
8812595 | Meiri et al. | Aug 2014 | B2 |
8825964 | Sopka et al. | Sep 2014 | B1 |
8838849 | Meiri et al. | Sep 2014 | B1 |
8862546 | Natanzon et al. | Oct 2014 | B1 |
8880788 | Sundaram | Nov 2014 | B1 |
8914596 | Lecrone et al. | Dec 2014 | B2 |
8966211 | Arnon et al. | Feb 2015 | B1 |
8977826 | Meiri et al. | Mar 2015 | B1 |
9002904 | Meiri et al. | Apr 2015 | B1 |
9009437 | Bjornsson et al. | Apr 2015 | B1 |
9026492 | Shorey et al. | May 2015 | B1 |
9026696 | Natanzon et al. | May 2015 | B1 |
9037816 | Halstead et al. | May 2015 | B1 |
9037822 | Meiri et al. | May 2015 | B1 |
9100343 | Riordan et al. | Aug 2015 | B1 |
9104326 | Frank et al. | Aug 2015 | B2 |
9110693 | Meiri et al. | Aug 2015 | B1 |
9208162 | Hallak et al. | Dec 2015 | B1 |
9270592 | Sites | Feb 2016 | B1 |
9286003 | Hallak et al. | Mar 2016 | B1 |
9304889 | Chen et al. | Apr 2016 | B1 |
9317362 | Khan | Apr 2016 | B2 |
9323750 | Natanzon et al. | Apr 2016 | B2 |
9342465 | Meiri | May 2016 | B1 |
9378106 | Ben-Moshe et al. | Jun 2016 | B1 |
9396243 | Halevi et al. | Jul 2016 | B1 |
9418131 | Halevi et al. | Aug 2016 | B1 |
9483355 | Meiri et al. | Nov 2016 | B1 |
9524220 | Veprinsky et al. | Dec 2016 | B1 |
9558083 | LeCrone et al. | Jan 2017 | B2 |
9606739 | LeCrone et al. | Mar 2017 | B1 |
9606870 | Meiri et al. | Mar 2017 | B1 |
9753663 | LeCrone et al. | Sep 2017 | B1 |
9762460 | Pawlowski et al. | Sep 2017 | B2 |
9785468 | Mitchell et al. | Oct 2017 | B2 |
9959063 | Meiri et al. | May 2018 | B1 |
9959073 | Meiri | May 2018 | B1 |
10007466 | Meiri et al. | Jun 2018 | B1 |
10025843 | Meiri et al. | Jul 2018 | B1 |
10055161 | Meiri et al. | Aug 2018 | B1 |
10095428 | Meiri et al. | Oct 2018 | B1 |
10152527 | Meiri et al. | Dec 2018 | B1 |
20010054131 | Alvarez, II et al. | Dec 2001 | A1 |
20020056031 | Skiba et al. | May 2002 | A1 |
20030023656 | Hutchison et al. | Jan 2003 | A1 |
20030126122 | Bosley et al. | Jul 2003 | A1 |
20030131184 | Kever | Jul 2003 | A1 |
20030145251 | Cantrill | Jul 2003 | A1 |
20040267835 | Zwilling et al. | Dec 2004 | A1 |
20050039171 | Avakian et al. | Feb 2005 | A1 |
20050071579 | Luick | Mar 2005 | A1 |
20050125626 | Todd | Jun 2005 | A1 |
20050144416 | Lin | Jun 2005 | A1 |
20050171937 | Hughes et al. | Aug 2005 | A1 |
20050193084 | Todd et al. | Sep 2005 | A1 |
20050278346 | Shang et al. | Dec 2005 | A1 |
20060031653 | Todd et al. | Feb 2006 | A1 |
20060031787 | Ananth et al. | Feb 2006 | A1 |
20060070076 | Ma | Mar 2006 | A1 |
20060106747 | Bartfai et al. | May 2006 | A1 |
20060123212 | Yagawa | Jun 2006 | A1 |
20060242442 | Armstrong et al. | Oct 2006 | A1 |
20070208788 | Chakravarty et al. | Sep 2007 | A1 |
20080082736 | Chow | Apr 2008 | A1 |
20080163215 | Jiang et al. | Jul 2008 | A1 |
20080178050 | Kern et al. | Jul 2008 | A1 |
20080288739 | Bamba et al. | Nov 2008 | A1 |
20090006745 | Cavallo et al. | Jan 2009 | A1 |
20090030986 | Bates | Jan 2009 | A1 |
20090089483 | Tanaka et al. | Apr 2009 | A1 |
20090172273 | Piszczek et al. | Jul 2009 | A1 |
20090222596 | Flynn et al. | Sep 2009 | A1 |
20090248986 | Citron et al. | Oct 2009 | A1 |
20090319996 | Shafi et al. | Dec 2009 | A1 |
20100042790 | Mondal et al. | Feb 2010 | A1 |
20100161884 | Kurashige | Jun 2010 | A1 |
20100180145 | Chu | Jul 2010 | A1 |
20100199066 | Artan et al. | Aug 2010 | A1 |
20100205330 | Noborikawa et al. | Aug 2010 | A1 |
20100223619 | Jaquet et al. | Sep 2010 | A1 |
20100257149 | Cognigni et al. | Oct 2010 | A1 |
20100287427 | Kim et al. | Nov 2010 | A1 |
20110078494 | Maki et al. | Mar 2011 | A1 |
20110083026 | Mikami et al. | Apr 2011 | A1 |
20110099342 | Ozdemir | Apr 2011 | A1 |
20110126045 | Bennett | May 2011 | A1 |
20110185105 | Yano et al. | Jul 2011 | A1 |
20110202744 | Kulkarni et al. | Aug 2011 | A1 |
20110225122 | Denuit et al. | Sep 2011 | A1 |
20120054472 | Altman et al. | Mar 2012 | A1 |
20120124282 | Frank et al. | May 2012 | A1 |
20120278793 | Jalan et al. | Nov 2012 | A1 |
20120290546 | Smith et al. | Nov 2012 | A1 |
20120290798 | Huang | Nov 2012 | A1 |
20120304024 | Rohleder et al. | Nov 2012 | A1 |
20130031077 | Liu et al. | Jan 2013 | A1 |
20130054524 | Anglin et al. | Feb 2013 | A1 |
20130073527 | Bromley | Mar 2013 | A1 |
20130110783 | Wertheimer et al. | May 2013 | A1 |
20130111007 | Hoffmann et al. | May 2013 | A1 |
20130138607 | Bashyam et al. | May 2013 | A1 |
20130151759 | Shim et al. | Jun 2013 | A1 |
20130198854 | Erway et al. | Aug 2013 | A1 |
20130227346 | Lee | Aug 2013 | A1 |
20130246724 | Furuya | Sep 2013 | A1 |
20130265883 | Henry et al. | Oct 2013 | A1 |
20130318051 | Kumar et al. | Nov 2013 | A1 |
20130339533 | Neerincx et al. | Dec 2013 | A1 |
20140032964 | Neerincx et al. | Jan 2014 | A1 |
20140082261 | Cohen | Mar 2014 | A1 |
20140136759 | Sprouse et al. | May 2014 | A1 |
20140143206 | Pittelko | May 2014 | A1 |
20140181119 | Chiueh et al. | Jun 2014 | A1 |
20140195484 | Wang et al. | Jul 2014 | A1 |
20140380282 | Ravindranath Sivalingam et al. | Dec 2014 | A1 |
20150006910 | Shapiro | Jan 2015 | A1 |
20150088823 | Chen et al. | Mar 2015 | A1 |
20150088945 | Kruus et al. | Mar 2015 | A1 |
20150134880 | Danilak et al. | May 2015 | A1 |
20150161194 | Provenzano et al. | Jun 2015 | A1 |
20150193342 | Ohara | Jul 2015 | A1 |
20160004642 | Sugimoto | Jan 2016 | A1 |
20160034692 | Singler | Feb 2016 | A1 |
20160246678 | Galbraith et al. | Aug 2016 | A1 |
Number | Date | Country |
---|---|---|
1804157 | Jul 2007 | EP |
WO 2010019596 | Feb 2010 | WO |
WO 2010040078 | Apr 2010 | WO |
WO 2012066528 | May 2012 | WO |
Entry |
---|
U.S. Appl. No. 14/034,981, filed Sep. 24, 2013, Halevi et al. |
U.S. Appl. No. 14/037,577, filed Sep. 26, 2013, Ben-Moshe et al. |
U.S. Appl. No. 14/230,405, filed Mar. 31, 2014, Meiri et al. |
U.S. Appl. No. 15/001,784, filed Jan. 20, 2016, Meiri et al. |
U.S. Appl. No. 14/230,414, filed Mar. 31, 2014, Meiri. |
U.S. Appl. No. 14/317,449, filed Jun. 27, 2014, Halevi et al. |
U.S. Appl. No. 14/494,895, filed Sep. 24, 2014, Meiri et al. |
U.S. Appl. No. 14/494,899, filed Sep. 24, 2014, Chen et al. |
PCT International Search Report and Written Opinion dated Dec. 1, 2011 for PCT Application No. PCT/IL2011/000692; 11 Pages. |
PCT International Preliminary Report dated May 30, 2013 for PCT Patent Application No. PCT/IL2011/000692; 7 Pages. |
U.S. Appl. No. 12/945,915; 200 Pages. |
U.S. Appl. No. 12/945,915; 108 Pages. |
U.S. Appl. No. 12/945,915; 67 Pages. |
Notice of Allowance dated Apr. 13, 2015 corresponding to U.S. Appl. No. 14/037,511; 11 Pages. |
Non-Final Office Action dated May 11, 2015 corresponding to U.S. Appl. No. 14/037,626; 13 Pages. |
Response to Office Action dated May 11, 2015 corresponding to U.S. Appl. No. 14/037,626; Response filed on Jul. 20, 2015; 10 Pages. |
Notice of Allowance dated Oct. 26, 2015 corresponding to U.S. Appl. No. 14/037,626; 12 Pages. |
Office Action dated Jul. 22, 2015 corresponding to U.S. Appl. No. 14/034,981; 28 Pages. |
Response to Office Action dated Jul. 22, 2015 corresponding to U.S. Appl. No. 14/034,981; Response filed on Dec. 22, 2015; 14 Pages. |
Office Action dated Sep. 1, 2015 corresponding to U.S. Appl. No. 14/230,414; 13 Pages. |
Response to Office Action dated Sep. 1, 2015 corresponding to U.S. Appl. No. 14/230,414; Response filed on Jan. 14, 2016; 10 Pages. |
Restriction Requirement dated Sep. 24, 2015 corresponding to U.S. Appl. No. 14/230,405; 8 Pages. |
Response to Restriction Requirement dated Sep. 24, 2015 corresponding to U.S. Appl. No. 14/230,405; Response filed Oct. 6, 2015; 1 Page. |
Office Action dated Dec. 1, 2015 corresponding to U.S. Appl. No. 14/230,405; 17 Pages. |
Nguyen et al., “B+ Hash Tree: Optimizing Query Execution Times for On-Disk Semantic Web Data Structures;” Proceedings of the 6th International Workshop on Scalable Semantic Web Knowledge Base Systems; Shanghai, China, Nov. 8, 2010; 16 Pages. |
U.S. Appl. No. 14/979,890, filed Dec. 28, 2015, Meiri et al. |
Office Action dated Feb. 4, 2016 for U.S. Appl. No. 14/037,577; 26 Pages. |
Notice of Allowance dated Feb. 10, 2016 for U.S. Appl. No. 14/494,899; 19 Pages. |
U.S. Appl. No. 15/085,168, filed Mar. 30, 2016, Meiri et al. |
U.S. Appl. No. 15/085,172, filed Mar. 30, 2016, Meiri. |
U.S. Appl. No. 15/085,181, filed Mar. 30, 2016, Meiri et al. |
Notice of Allowance dated Feb. 26, 2016 corresponding to U.S. Appl. No. 14/230,414; 8 Pages. |
Final Office Action dated Apr. 6, 2016 corresponding to U.S. Appl. No. 14/034,981; 38 Pages. |
Response filed on May 2, 2016 to the Non-Final Office Action dated Dec. 1, 2015; for U.S. Appl. No. 14/230,405; 8 pages. |
Response filed on May 2, 2016 to the Non-Final Office Action dated Feb. 4, 2016, for U.S. Appl. No. 14/037,577; 10 pages. |
Notice of Allowance dated May 20, 2016 corresponding to U.S. Appl. No. 14/037,577; 19 Pages. |
U.S. Appl. No. 15/196,674, filed Jun. 29, 2016, Kleiner et al. |
U.S. Appl. No. 15/196,427, filed Jun. 29, 2016, Shveidel. |
U.S. Appl. No. 15/196,374, filed Jun. 29, 2016, Shveidel et al. |
U.S. Appl. No. 15/196,447, filed Jun. 29, 2016, Shveidel et al. |
U.S. Appl. No. 15/196,472, filed Jun. 29, 2016, Shveidel. |
Response to U.S. Final Office Action dated Apr. 6, 2016 corresponding to U.S. Appl. No. 14/034,981; Response filed on Jun. 16, 2016; 11 Pages. |
Notice of Allowance dated Jun. 29, 2016 corresponding to U.S. Appl. No. 14/034,981; 14 Pages. |
U.S. Final Office Action dated Jul. 29, 2016 corresponding to U.S. Appl. No. 14/230,405; 29 Pages. |
Notice of Allowance dated Jun. 6, 2016 corresponding to U.S. Appl. No. 14/317,449; 43 Pages. |
U.S. Final Office Action dated Feb. 22, 2017 for U.S. Appl. No. 15/001,784; 15 Pages. |
U.S. Office Action dated Sep. 22, 2016 corresponding to U.S. Appl. No. 15/001,784; 27 Pages. |
Response to U.S. Office Action dated Jul. 29, 2016 corresponding to U.S. Appl. No. 14/230,405; Response filed on Oct. 6, 2016; 9 Pages. |
Response to U.S. Final Office Action dated Nov. 16, 2016 corresponding to U.S. Appl. No. 14/230,405; Response filed on Dec. 1, 2016; 8 Pages. |
Response to U.S. Office Action dated Sep. 22, 2016 corresponding to U.S. Appl. No. 15/001,784; Response filed on Dec. 8, 2016; 16 Pages. |
Final Office Action dated Nov. 16, 2016 from U.S. Appl. No. 14/230,405; 23 Pages. |
Notice of Allowance dated Jan. 25, 2017 for U.S. Appl. No. 14/230,405; 8 Pages. |
U.S. Non-Final Office Action dated Jul. 6, 2017 for U.S. Appl. No. 14/494,895; 36 Pages. |
Request for Continued Examination dated Dec. 4, 2017 for U.S. Appl. No. 15/001,784; 3 Pages. |
U.S. Non-Final Office Action dated Dec. 1, 2017 for U.S. Appl. No. 14/979,890; 10 Pages. |
Notice of Allowance dated Nov. 28, 2017 for U.S. Appl. No. 15/001,784; 9 Pages. |
U.S. Final Office Action dated Nov. 2, 2017 for U.S. Appl. No. 14/494,895; 12 Pages. |
Response to U.S. Non-Final Office Action dated Jul. 6, 2017 for U.S. Appl. No. 14/494,895; Response filed Oct. 3, 2017; 10 Pages. |
U.S. Notice of Allowance dated Feb. 21, 2018 for U.S. Appl. No. 15/196,427; 31 Pages. |
Final Office Action dated Jul. 5, 2018 for U.S. Appl. No. 14/979,890; 15 pages. |
Notice of Allowance dated Oct. 11, 2018 for U.S. Appl. No. 14/979,890; 9 Pages. |
Notice of Allowance dated May 8, 2018 for U.S. Appl. No. 15/001,784; 9 Pages. |
U.S. Non-Final Office Action dated Dec. 29, 2017 corresponding to U.S. Appl. No. 15/196,674; 34 Pages. |
U.S. Non-Final Office Action dated Nov. 1, 2017 corresponding to U.S. Appl. No. 15/196,374; 64 Pages. |
Response to U.S. Non-Final Office Action dated Nov. 1, 2017 corresponding to U.S. Appl. No. 15/196,374; Response filed Jan. 30, 2018; 14 Pages. |
U.S. Non-Final Office Action dated Dec. 11, 2017 corresponding to U.S. Appl. No. 15/196,447; 54 Pages. |
U.S. Non-Final Office Action dated Jan. 8, 2018 corresponding to U.S. Appl. No. 15/196,472; 16 Pages. |
U.S. Appl. No. 16/050,247, filed Jul. 31, 2018, Schneider et al. |
U.S. Appl. No. 16/177,782, filed Nov. 1, 2018, Hu et al. |
U.S. Appl. No. 16/264,825, filed Feb. 1, 2019, Chen et al. |
U.S. Appl. No. 16/263,414, filed Jan. 31, 2019, Meiri et al. |
U.S. Appl. No. 15/076,775, filed Mar. 22, 2016, Chen et al. |
U.S. Appl. No. 15/076,946, filed Mar. 22, 2016, Meiri. |
U.S. Appl. No. 15/085,188, filed Mar. 30, 2016, Meiri et al. |
U.S. Appl. No. 15/499,297, filed Apr. 27, 2017, Kucherov et al. |
U.S. Appl. No. 15/499,303, filed Apr. 27, 2017, Kucherov et al. |
U.S. Appl. No. 15/499,226, filed Apr. 27, 2017, Meiri et al. |
U.S. Appl. No. 15/499,199, filed Apr. 27, 2017, Stronge et al. |
U.S. Appl. No. 15/797,329, filed Oct. 30, 2017, Parasnis et al. |
U.S. Appl. No. 15/971,153, filed May 4, 2018, Meiri et al. |
U.S. Appl. No. 15/971,310, filed May 4, 2018, Kucherov et al. |
U.S. Appl. No. 15/971,325, filed May 4, 2018, Kucherov et al. |
U.S. Appl. No. 15/971,445, filed May 4, 2018, Kucherov et al. |
U.S. Non-Final Office Action dated Dec. 17, 2019 for U.S. Appl. No. 15/885,290; 17 Pages. |
Response to U.S. Non-Final Office Action dated Dec. 17, 2019 for U.S. Appl. No. 15/885,290; Response filed Apr. 6, 2020; 93 Pages. |
U.S. Notice of Allowance dated Apr. 27, 2020 for U.S. Appl. No. 15/885,290; 21 Pages. |
Abdel-Ghaffar et al., “Optimal Disk Allocation for Partial Match Queries;” ACM Transactions on Database Systems, vol. 18, No. 1; Mar. 1993; pp. 132-156; 25 Pages. |
 | Number | Date | Country |
---|---|---|---|
Parent | 14230405 | Mar 2014 | US |
Child | 15001789 | | US |