The present disclosure generally relates to systems and networks for distributed storage of data over a plurality of nodes, and more particularly to processes and configurations for using storage codes with a flexible number of nodes.
In distributed systems, error-correcting codes are ubiquitous to achieve high efficiency and reliability. However, most codes have a fixed redundancy level. The number of failures varies over time, and as such, the fixed redundancy overcompensates for errors in practical systems and can result in latency. For example, when the number of failures is smaller than the designed redundancy level, the redundant storage nodes are not used efficiently. There is a desire for improvements in storage code processes and configurations.
Prior configurations have been proposed for error correction to minimize a cost function such as a linear combination of bandwidth, delay or the number of hops. Similarly, other processes have been proposed to reduce the bandwidth or to achieve the optimal field size. One drawback of existing processes may be the requirement of identifying the set of available nodes prior to computing. There is a desire for improvements which address one or more deficiencies of existing systems and improve reconstruction of data.
Disclosed and described herein are systems, methods and configurations for using flexible storage codes to reconstruct data. According to embodiments, a method for reconstructing data using flexible storage codes includes determining, by a device, a node failure for received information using a storage code for the information, the received information received from at least one of a plurality of storage nodes, and determining, by the device, a number of nodes and a number of symbols of a flexible storage code for error correction, wherein the flexible storage code is generated using the storage code for the information. The method includes reconstructing, by the device, the received information using the determined number of nodes and the number of symbols of the flexible storage code.
In one embodiment, determining a node failure includes identifying at least one node failure from the storage code for the information.
In one embodiment, the plurality of storage nodes are in a distributed network and information is received by the device with symbols encoded over a finite field into a number of nodes, the flexible storage code configured for a flexible number of nodes and symbols.
In one embodiment, the number of nodes and the number of symbols of the flexible storage code is determined based on a number of node failures.
In one embodiment, the storage code is at least one of a maximum distance separable (MDS) storage code and a minimum storage regenerating (MSR) storage code.
In one embodiment, the flexible storage code is configured as a flexible locally recoverable code (LRC), wherein the flexible LRC is an array code over a finite field, the array generated based on a code word length n, dimension k, and code word symbol size ℓ, and wherein the code is parametrized by {(Rj, kj, ℓj): 1≤j≤α} that satisfies kjℓj=kℓ, 1≤j≤α, k1>k2> . . . >kα=k, ℓα=ℓ, and Rj=kj+⌈kj/r⌉−1 for single node failure recovery from a subset of r nodes of the storage code.
In one embodiment, the flexible storage code is configured as a flexible partial maximum distance separable code (PMDSC), wherein the flexible PMDSC is an array code over a finite field, the array generated based on a code word length n, dimension k, code word symbol size ℓ, and a number of extra symbol failures s, and wherein the code is parametrized by a set {(Rj, kj, ℓj): 1≤j≤α} satisfying kjℓj=kℓ, 1≤j≤α, k1>k2> . . . >kα=k, ℓα=ℓ, and Rj=kj, such that when ℓj symbols are accessed in each node, up to n−Rj node failures and s extra symbol failures can be tolerated.
In one embodiment, the flexible storage code is configured as a flexible minimum storage regenerating code (MSRC), wherein repair bandwidth is the minimum amount of transmission required to repair a single node failure from all remaining nodes, normalized by the size of the node, and wherein the repair bandwidth is bounded by the minimum storage regenerating (MSR) bound (n−1)/(n−k), based on a code word length n and dimension k.
In one embodiment, reconstructing the received information with the number of nodes and the number of symbols of a flexible storage code includes using a subarray of nodes and symbols of the storage code for the information.
In one embodiment, the method also includes updating the number of nodes and number of symbols of the flexible storage code for error correction of additional information.
Embodiments are also directed to a device configured to reconstruct data using flexible storage codes. The device includes a receiver configured to receive information from a distributed network and a controller. The controller is configured to determine a node failure for received information using a storage code for the information, the received information received from at least one of a plurality of storage nodes, and determine a number of nodes and a number of symbols of a flexible storage code for error correction, wherein the flexible storage code is generated using the storage code for the information. The controller is also configured to reconstruct the received information using the determined number of nodes and the number of symbols of the flexible storage code.
Other aspects, features, and techniques will be apparent to one skilled in the relevant art in view of the following detailed description of the embodiments.
The features, objects, and advantages of the present disclosure will become more apparent from the detailed description set forth below when taken in conjunction with the drawings in which like reference characters identify correspondingly throughout and wherein:
According to embodiments, a system, methods and device configurations are provided for flexible storage codes that make it possible to recover the entire information through accessing a flexible number of nodes. According to embodiments, flexible storage codes are a class of error-correcting codes that can recover information from a flexible number of storage nodes. As a result, processes and devices make better use of the available storage nodes in the presence of unpredictable node failures and reduce the data access latency. According to embodiments, use of flexible storage codes may include use of a framework for accessing and correction of data.
In distributed systems, error-correcting codes are ubiquitous to achieve high efficiency and reliability. Unlike codes that have a fixed redundancy level, embodiments provide solutions for systems and processes where the number of failures varies over time. When the number of failures is smaller than the designed redundancy level, the redundant storage nodes are not used efficiently. Embodiments provide flexible storage codes that make it possible to recover the entire information through accessing a flexible number of nodes.
Methods and systems are thus described in the various embodiments for using flexible storage codes. Methods and systems described herein may apply to distributed storage systems in which storage infrastructure can split data across multiple physical servers, and often across more than one data center. A cluster of storage units may provide information to one or more devices, with a mechanism for data synchronization and coordination between cluster nodes. In the event of a data error, such as one or more nodes of the system failing to provide data, flexible correction codes are provided that can account for the failure of nodes such that the number of symbols and/or parities may be selected for correction.
According to embodiments, multiple constructions of Flexible MDS codes may be utilized for different application scenarios including error-correction codes, universally decodable matrices, secret sharing and private information retrieval. Embodiments provide a framework that can produce flexible storage codes for different code families including important types of storage codes, such as codes that efficiently recover from a single node failure, or codes that correct mixed types of node and symbol failures. Embodiments of the disclosure include a framework for flexible codes that can generate flexible storage codes given a construction of fixed (non-flexible) storage code.
Flexible LRC (locally recoverable) codes allow information reconstruction from a variable number of available nodes while maintaining the locality property, providing efficient single node recovery. According to embodiments, for an (n,k,ℓ,r) flexible LRC code parametrized by {(Rj,kj,ℓj): 1≤j≤α} that satisfies kjℓj=kℓ, 1≤j≤α, k1>k2> . . . >kα=k, ℓα=ℓ, and Rj=kj+⌈kj/r⌉−1, a single node failure can be recovered from a subset of r nodes, while the total information is reconstructed by accessing ℓj symbols in Rj nodes. Embodiments provide code constructions based on the optimal LRC code construction.
Embodiments can be applied using flexible PMDS (partial MDS) codes, which are designed to tolerate a flexible number of node failures and a given number of extra symbol failures, desirable for solid-state drives due to the presence of mixed types of failures. We provide an (n,k,ℓ,s) flexible PMDS code parameterized by a set {(Rj,kj,ℓj): 1≤j≤α} satisfying kjℓj=kℓ, 1≤j≤α, k1>k2> . . . >kα=k, ℓα=ℓ, and Rj=kj, such that when ℓj symbols are accessed in each node, we can tolerate n−Rj node failures and s extra symbol failures. Embodiments construct flexible codes from the PMDS code.
Embodiments can be applied using flexible MSR (minimum storage regenerating) codes, a type of flexible MDS code in which a single node failure is recovered by downloading the minimum amount of information from the available nodes. Both vector and scalar codes are obtained by applying the flexible code framework to MSR codes.
The disclosure provides an analysis of benefits including latency analysis for flexible storage codes. It is demonstrated that flexible storage codes according to embodiments have a lower latency compared to the corresponding fixed codes.
As used herein, the terms “a” or “an” shall mean one or more than one. The term “plurality” shall mean two or more than two. The term “another” is defined as a second or more. The terms “including” and/or “having” are open ended (e.g., comprising). The term “or” as used herein is to be interpreted as inclusive or meaning any one or any combination. Therefore, “A, B or C” means any of the following: A; B; C; A and B; A and C; B and C; A, B and C. An exception to this definition will occur only when a combination of elements, functions, steps or acts are in some way inherently mutually exclusive.
Reference throughout this document to “one embodiment,” “certain embodiments,” “an embodiment,” or similar term means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of such phrases in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments without limitation.
As used herein, flexible storage codes may function to provide error correction. As further discussed herein, the number of symbols and nodes used in a correction code may be based on the error. By way of example, two symbols may be accessed in three nodes, or three symbols may be accessed in two nodes, depending on the error. Processes and device configurations described herein are configured to determine the number of symbols and nodes used in a correction.
Process 200 may be performed by a device when information is received in a distributed network. Embodiments characterizing a distributed network may include information received from one or more sources or storage nodes. According to embodiments, operations may be performed for data received from one or more storage sources or devices. According to yet other embodiments, operations may be based on information stored with a storage code, such as at least one of a maximum distance separable (MDS) storage code and a minimum storage regenerating (MSR) storage code.
Process 200 may be initiated by a device determining that an error correction is required for information stored in a distributed network at block 205. An error correction may be identified based on the failure of one or more nodes of the distributed network. Process 200 may be performed after information is received. Process 200 may optionally include receiving data or information at block 220. The device may identify that an error correction is required based on data received. Data received can include transmitted information and storage code information for the transmitted information. In a distributed network, data may be received from one or more storage nodes. Similarly, transmitted information may be transmitted in packets or groupings, such that an information stream may include one or more blocks or nodes of data. A node failure may be determined when data is not received from a storage device. Similarly, node failure may be determined when a block of data that is part of a stream is not received from a storage device, or when data is received with one or more errors. Embodiments can utilize the storage code to recreate and/or regenerate damaged or missing data. Data with one or more errors, due to transmission errors, detection errors, etc., may be identified using one or more of the symbols and parities in the storage code associated with the data. With data transmission, speed and elimination of redundancy can improve device performance. Embodiments allow for operations to use flexible storage codes, such that the number of symbols and nodes of a storage code may be selected in order to reconstruct data. According to embodiments, process 200 can reconstruct data using one or more flexible storage codes described herein.
At block 205, process 200 may be performed by a device to determine a node failure for received information using a storage code for the information. The received information may be received from at least one of a plurality of storage nodes. The received information may be received in one or more blocks of code, and the received blocks of code may be considered nodes for purposes of the disclosure. Node failure may be determined by identifying at least one node failure from the storage code using the storage code information. According to embodiments, one or more symbols encoded with the received information may be used to identify a node failure and/or node error. A plurality of storage nodes of a distributed network may provide information that is received by the device with symbols encoded over a finite field into a number of nodes. According to embodiments, information can be broken up into blocks, each block being a node of information and each block including storage code symbols. Symbols of the blocks may also include parities to allow for error correction. According to embodiments and as described herein, a flexible storage code is used to identify and operate with a portion of transmitted symbols to provide a flexible number of nodes and symbols. Unlike error correction codes that use a fixed number of symbols, the fixed number of symbols being a subset of available symbols, processes described herein determine the number of symbols and nodes to use in received information for error correction. Determining the number of symbols and nodes of the storage code may be based on the number of failures for the received information.
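A node-failure determination of the kind described for block 205 can be sketched in a few lines. The example below is illustrative only: the block layout, the CRC32 integrity check, and the function name are hypothetical and not part of the disclosed storage code; it simply flags nodes whose blocks are missing or corrupted.

```python
# Minimal sketch of node-failure detection (hypothetical data layout):
# a node is flagged as failed if its block is missing or its checksum disagrees.
import zlib

def detect_failed_nodes(received, n, checksums):
    """Return the ids of failed nodes among nodes 0..n-1.

    `received` maps node id -> bytes actually delivered;
    `checksums` maps node id -> CRC32 recorded at encoding time.
    """
    failed = []
    for node in range(n):
        block = received.get(node)
        if block is None or zlib.crc32(block) != checksums[node]:
            failed.append(node)
    return failed

blocks = {0: b"alpha", 1: b"bravo", 2: b"charlie", 3: b"delta"}
crcs = {i: zlib.crc32(b) for i, b in blocks.items()}
lost = dict(blocks)
del lost[2]                      # node 2 never responds
lost[3] = b"delt@"               # node 3 delivers a corrupted block
print(detect_failed_nodes(lost, 4, crcs))   # -> [2, 3]
```

The count of failed nodes returned here is exactly the input the next step needs when selecting how many nodes and symbols to read.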
At block 210, process 200 includes determining a number of nodes and a number of symbols of a flexible storage code for error correction. According to embodiments, a system providing information may encode storage codes as information symbols over a finite field into a number of nodes. Nodes of the storage code relate to one or more nodes of data that are received of the information storage code. The storage code may be arranged as an array of codes including parameters for code word length, dimension, and size of each node. According to embodiments, the number of nodes and the number of symbols is determined based on the number of failures and based on values of the storage code information.
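The selection at block 210 reduces to picking, from the parameter set {(Rj, kj, ℓj)}, the tuple that the surviving nodes can support while reading the fewest symbols per node. A minimal sketch, with illustrative table values (an assumption for the example; not a prescribed parameter set):

```python
# Hypothetical parameter table for a flexible storage code:
# each tuple (R_j, k_j, l_j) satisfies k_j * l_j = k * l, with k_j decreasing
# and l_j increasing, so fewer available nodes cost more symbols per node.
PARAMS = [(8, 6, 2), (5, 4, 3)]   # ordered by increasing l_j

def choose_tuple(params, available_nodes):
    """Pick the tuple reading the fewest symbols per node that the
    currently available nodes can support; None if reconstruction fails."""
    for R_j, k_j, l_j in params:
        if available_nodes >= R_j:
            return (R_j, k_j, l_j)
    return None

print(choose_tuple(PARAMS, 10))  # -> (8, 6, 2): enough nodes, read 2 symbols each
print(choose_tuple(PARAMS, 6))   # -> (5, 4, 3): more failures, read 3 symbols each
print(choose_tuple(PARAMS, 3))   # -> None: too many failures to reconstruct
```

This captures the trade-off described above: as more nodes fail, the decoder falls back to a tuple with a smaller Rj but a larger ℓj.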
Process 200 illustrates use of flexible storage codes as a class of error-correcting codes that can recover information from a flexible number of storage nodes. According to embodiments, flexible storage codes relate to the use of elements within a storage code. By minimizing and/or selecting parameters of the storage code, one or more of efficiency and speed of error correction may be improved. As a result, process 200 can make better use of the available storage nodes in the presence of unpredictable node failures and reduce the data access latency. Process 200, and embodiments herein, may characterize a storage system that encodes kℓ information symbols over a finite field into n nodes, each of size ℓ symbols. The code is parameterized by a set of tuples {(Rj,kj,ℓj): 1≤j≤α} satisfying k1ℓ1=k2ℓ2= . . . =kαℓα and k1>k2> . . . >kα=k, ℓα=ℓ, such that the information symbols can be reconstructed from any Rj nodes, each node accessing ℓj symbols. In other words, the code allows a flexible number of nodes for decoding to accommodate the variance in the data access time of the nodes. Code constructions are presented for different storage scenarios, including LRC (locally recoverable) codes, PMDS (partial MDS) codes, and MSR (minimum storage regenerating) codes. As discussed herein, analysis is provided for the latency of accessing information. According to other embodiments, the flexible storage code of process 200 may be configured as a code construction, such as one or more of a flexible locally recoverable code (LRC), a flexible partial maximum distance separable code (PMDSC), and a flexible minimum storage regenerating code (MSRC).
In one embodiment, the flexible storage code is configured as a flexible locally recoverable code (LRC), wherein the flexible LRC is an array code over a finite field, the array generated based on a code word length n, dimension k, and code word symbol size ℓ, and wherein the code is parametrized by {(Rj,kj,ℓj): 1≤j≤α} that satisfies kjℓj=kℓ, 1≤j≤α, k1>k2> . . . >kα=k, ℓα=ℓ, and Rj=kj+⌈kj/r⌉−1 for single node failure recovery from a subset of r nodes of the storage code.
In one embodiment, the flexible storage code is configured as a flexible partial maximum distance separable code (PMDSC), wherein the flexible PMDSC is an array code over a finite field, the array generated based on a code word length n, dimension k, code word symbol size ℓ, and a number of extra symbol failures s, and wherein the code is parametrized by a set {(Rj,kj,ℓj): 1≤j≤α} satisfying kjℓj=kℓ, 1≤j≤α, k1>k2> . . . >kα=k, ℓα=ℓ, and Rj=kj, such that when ℓj symbols are accessed in each node, up to n−Rj node failures and s extra symbol failures can be tolerated.
In one embodiment, the flexible storage code is configured as a flexible minimum storage regenerating code (MSRC), wherein repair bandwidth is the minimum amount of transmission required to repair a single node failure from all remaining nodes, normalized by the size of the node, and wherein the repair bandwidth is bounded by the minimum storage regenerating (MSR) bound (n−1)/(n−k), based on a code word length n and dimension k.
At block 215, process 200 includes reconstructing information using the flexible storage code and determined numbers of nodes and symbols of the flexible storage code using a subarray of nodes and symbols of the storage code for the information. Process 200 may include utilization of the number of symbols accessed and the number of nodes accessed. By way of example, two symbols may be accessed in three nodes or three symbols may be accessed in two nodes as is shown in
Process 200 may include determining one or more additional nodes or symbols to use for decoding of other information. By way of example, after correction at block 215, process 200 may include detection of a node failure at block 205, such that particulars of the failure and number of failed nodes can be used for additional and/or subsequent error correction. As such, process 200 may include updating the number of nodes and number of symbols of the flexible storage code for error correction of additional information.
At optional block 225, process 200 may output reconstructed information. Reconstructed information may provide the information requested from a distributed network, even in the event of a storage node failure or error in transmission.
Communication module 315 may be configured to receive information from a distributed network. According to one embodiment, controller 305 is configured to determine that error correction is required for the information, determine a number of nodes and number of symbols of a flexible storage code, and reconstruct the information using the flexible storage code and determined numbers of nodes and symbols.
Motivated by reducing the data access latency, embodiments provide flexible storage codes. According to embodiments, a flexible storage code is an (n,k,ℓ) code that is parameterized by a given integer α and a set of tuples {(Rj,kj,ℓj): 1≤j≤α} that satisfies kjℓj=kℓ, 1≤j≤α, k1>k2> . . . >kα=k, ℓα=ℓ, such that if we take ℓj particular coordinates of each code word symbol, denoted by (Cm1,i, Cm2,i, . . . , Cmℓj,i)T, i∈[n], where [n] is the set of positive integers smaller than or equal to n, we can recover the entire information from any Rj nodes.
For example, flexible maximum distance separable (MDS) codes are codes satisfying the singleton bound for each kj, namely, Rj=kj, 1≤j≤α.
According to embodiments, flexible code 400 is an example of a (4, 2, 3) flexible MDS code. Flexible code 400 includes a plurality of information symbols 4051-n (e.g., C1,1, C1,2, C1,3, C2,1, C2,2, C2,3), in this example six (6) information symbols. Flexible code 400 includes a plurality of parities 4101-n: W1=C1,1+C1,2+C1,3 and W′1=C1,1+2C1,2+3C1,3 are the parities for C1,1, C1,2, C1,3, and W2=C2,1+C2,2+C2,3 and W′2=C2,1+2C2,2+3C2,3 are the parities for C2,1, C2,2, C2,3.
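The two decoding modes of such a (4, 2, 3) flexible MDS code can be sketched numerically. The sketch below uses an equivalent Reed-Solomon-style encoding over the prime field GF(13) (an assumption for illustration; the field, evaluation points, and variable names are hypothetical and not the exact parities of code 400): each of the first two rows is a degree-2 polynomial evaluated at five points, and the last row is a (4, 2) MDS code on the extra parities W′1, W′2.

```python
P = 13  # a prime field large enough for the evaluation points (assumption)

def lagrange_eval(points, x):
    """Evaluate the unique polynomial through `points` at `x`, mod P."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

info = [[1, 5, 9], [2, 7, 11]]        # two rows of three information symbols

# Row m: degree-2 polynomial f_m with f_m(t) = C_{m,t} for t = 1, 2, 3;
# f_m(4) plays the role of W_m and f_m(5) the role of the extra parity W'_m.
rows = [[lagrange_eval(list(zip([1, 2, 3], c)), t) for t in range(1, 6)]
        for c in info]

# Last row: a (4, 2) MDS code on (W'_1, W'_2) via a degree-1 polynomial h
h = [(1, rows[0][4]), (2, rows[1][4])]
last = [lagrange_eval(h, t) for t in range(1, 5)]
# Node i stores column i: (rows[0][i], rows[1][i], last[i]), i = 0..3

# Mode 1: any 3 nodes, first 2 symbols each (each row is a (4, 3) MDS code)
surv = [0, 1, 3]                      # node 2 failed
rec1 = [[lagrange_eval([(t + 1, rows[m][t]) for t in surv], x)
         for x in [1, 2, 3]] for m in range(2)]
assert rec1 == info

# Mode 2: any 2 nodes, all 3 symbols each: first recover W'_1, W'_2 from the
# last row, then each row has 3 known evaluations of a degree-2 polynomial.
surv = [1, 3]
rec2 = []
for m in range(2):
    w_extra = lagrange_eval([(t + 1, last[t]) for t in surv], m + 1)  # W'_m
    pts = [(t + 1, rows[m][t]) for t in surv] + [(5, w_extra)]
    rec2.append([lagrange_eval(pts, x) for x in [1, 2, 3]])
assert rec2 == info
print("reconstructed from 3 nodes x 2 symbols and from 2 nodes x 3 symbols")
```

Both assertions exercise the flexibility: the same stored array is decoded either from three nodes reading two symbols each, or from two nodes reading three symbols each.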
Framework for Flexible Storage Codes
Embodiments are directed to providing flexible storage codes and to providing a framework for flexible codes to convert a fixed (e.g., non-flexible) code construction into a flexible one. For the purpose of illustration, a flexible MDS code example is used. Other types of code constructions are discussed herein.
According to embodiments, a flexible storage code may be defined to incorporate a code word represented by an ℓ×n array over a finite field F, denoted as C∈(F^ℓ)^n, where n is called the code length and ℓ is called the sub-packetization. Each column in the code may correspond to a storage node. Embodiments can include selecting some fixed integers j∈[α], ℓj∈[ℓ], and recovery thresholds Rj∈[n]. Let the decoding columns be a subset of Rj columns of [n], and the decoding rows S1, S2, . . . , SRj⊆[ℓ] be subsets of rows, each with size ℓj. Denote the ℓj×Rj subarray of C that takes the rows S1 in the first decoding column, the rows S2 in the second decoding column, . . . , and the rows SRj in the last decoding column. The information will be reconstructed from this subarray. For flexible MDS codes, flexible MSR codes and flexible PMDS codes, the following is required: Rj=kj. According to embodiments, for the above types of codes, the parameter Rj may be omitted.
For flexible LRC codes, the following is required: Rj=kj+⌈kj/r⌉−1, since the minimum distance of an LRC code is upper bounded by d≤n−kj−⌈kj/r⌉+2.
The (n,k,ℓ) flexible storage code is parameterized by (Rj,kj,ℓj), j∈[α], for some positive integer α, such that kjℓj=kℓ, 1≤j≤α, k1>k2> . . . >kα=k, ℓα=ℓ. The flexible storage code encodes kℓ information symbols over a finite field F into n nodes, each with ℓ symbols. The code satisfies the following reconstruction condition for all j∈[α]: from any Rj nodes, each node accesses a set of ℓj symbols, and we can reconstruct all the information symbols. That is, the code is defined by an encoding function ε:(F^ℓ)^k→(F^ℓ)^n and decoding functions D:(F^ℓj)^Rj→(F^ℓ)^k for every choice of decoding columns Rj⊆[n], |Rj|=Rj, and decoding rows S1, S2, . . . , SRj⊆[ℓ], |S1|=|S2|= . . . =|SRj|=ℓj, which are dependent on the choice of decoding columns Rj.
The functions are chosen such that any information U∈(F^ℓ)^k can be reconstructed from the nodes in Rj: D(ε(U))=U, where ε(U) is restricted to the chosen decoding columns and rows.
According to embodiments, a flexible MDS code is defined as a flexible storage code such that Rj=kj.
Embodiments are also directed to decoding. From any k1=3 nodes, each node accesses the first ℓ1=2 symbols: the first 2 rows form a single parity-check (4, 3, 2) MDS code, and thus a device can determine the information symbols from any 3 out of 4 symbols in each row. From any k2=2 nodes, each node accesses all the ℓ2=3 symbols. According to embodiments, operations can include first decoding W′1 and W′2 in the last row, since the last row is a (4, 2, 1) MDS code. Then, (C1,1, C1,2, C1,3, W1, W′1) and (C2,1, C2,2, C2,3, W2, W′2) form two (5, 3, 1) MDS codes. According to embodiments, all the information symbols can be decoded from W′1, W′2 and any 2 columns of the first 2 rows.
According to embodiments, construction of a code may be based on a set of (n+kj−kα, kj, ℓj−ℓj−1) codes, each code called a layer, such that kjℓj=kℓ, j∈[α], k1>k2> . . . >kα=k, ℓα=ℓ, ℓ0=0. The first layer is encoded from the original information symbols and the other layers are encoded from the “extra parities”. The intuition for the flexible reconstruction is that after accessing symbols from some layers, we can decode the corresponding information symbols, which are in turn extra parity symbols in an upper layer. Therefore, the decoder can afford accessing fewer code word symbols in the upper layer, resulting in a smaller recovery threshold.
Each column, such as column 506, of storage codes 500 is a node. Note that only the first n columns under the storage nodes are stored, and the extra parities are auxiliary. Set ℓ0=0. We have α layers, and Layer j, j∈[α], is an (n+kj−kα, kj, ℓj−ℓj−1) code
[Cj,1, Cj,2, . . . , Cj,n, C′j,1, C′j,2, . . . , C′j,kj−kα],
where Cj,i=[Cj,1,i, Cj,2,i, . . . , Cj,ℓj−ℓj−1,i]T∈F^(ℓj−ℓj−1), i∈[n], are actually stored and C′j,i=[C′j,1,i, C′j,2,i, . . . , C′j,ℓj−ℓj−1,i]T∈F^(ℓj−ℓj−1), i∈[kj−kα], such as 511, are the auxiliary extra parities. The (n+k1−kα, k1, ℓ1) code in the first layer is encoded from the k1ℓ1=kℓ information symbols over F, and the (n+kj−kα, kj, ℓj−ℓj−1) code in Layer j, j≥2, is encoded from the extra parities C′j′,i, for j′∈[j−1], kj−kα+1≤i≤kj−1−kα. As a sanity check, Σj′=1..j−1 (kj−1−kj)(ℓj′−ℓj′−1)=(kj−1−kj)ℓj−1 extra parities over F are encoded into Layer j, which matches the code dimension kj(ℓj−ℓj−1) in that layer, since ℓ0=0 and kj−1ℓj−1=kjℓj.
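The dimension-matching sanity check above can be verified numerically; a minimal sketch (the parameter tuples are illustrative values satisfying kjℓj = kℓ, not values fixed by the disclosure):

```python
def check_layer_dimensions(tuples):
    """tuples = [(k_1, l_1), ..., (k_alpha, l_alpha)], k_j strictly decreasing,
    with k_j * l_j constant. Checks that the extra parities routed into each
    Layer j exactly fill that layer's dimension k_j * (l_j - l_{j-1})."""
    for j in range(1, len(tuples)):
        k_prev, l_prev = tuples[j - 1]
        k_j, l_j = tuples[j]
        assert k_prev * l_prev == k_j * l_j          # k_{j-1} l_{j-1} = k_j l_j
        supplied = (k_prev - k_j) * l_prev           # telescoped parity count
        assert supplied == k_j * (l_j - l_prev)      # equals layer dimension
    return True

print(check_layer_dimensions([(6, 2), (4, 3)]))                   # -> True
print(check_layer_dimensions([(12, 1), (6, 2), (4, 3), (3, 4)]))  # -> True
```

The first assertion restates the constraint on the parameter set; the second is the telescoped identity Σ(kj−1−kj)(ℓj′−ℓj′−1) = (kj−1−kj)ℓj−1 = kj(ℓj−ℓj−1).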
According to embodiments, the construction discussed above can be applied to different kinds of codes. For an (n,k,ℓ) flexible MDS code, the entire information can be recovered from any kj nodes, each node accessing its first ℓj symbols.
Embodiments also provide for the application of flexible codes to LRC (locally recoverable) codes, PMDS (partial maximum distance separable) codes, and MSR (minimum storage regenerating) codes. These codes provide a flexible reconstruction mechanism for the entire information, and either can reduce the single-failure repair cost (i.e., the number of helper nodes and the amount of transmitted information), or can tolerate mixed types of failures. Applications include failure protection in distributed storage systems and in solid-state drives.
Flexible LRC
An (n,k,ℓ,r) LRC code is defined as a code with length n, dimension k, sub-packetization size ℓ and locality r. Locality here means that for any single node failure or erasure, there exists a group of at most r available nodes (called helpers) such that the failure can be recovered from them. The minimum Hamming distance of an (n,k,ℓ,r) LRC code is bounded as d≤n−k−⌈k/r⌉+2, and LRC codes achieving the bound are called optimal LRC codes. For simplicity, (n,k,r) LRC codes are used to denote (n,k,ℓ,r) LRC codes with ℓ=1. According to embodiments, an LRC code may be optimal when the (n,k,r) LRC code encodes k information symbols into n/(r+1) groups of r+1 symbols each, where each group is an MDS code with dimension r and the whole code C has a minimum distance of d=n−k−⌈k/r⌉+2, such that all the information symbols can be decoded from any n−d+1=k+⌈k/r⌉−1 nodes.
According to embodiments, the (n,k,ℓ,r) flexible LRC code may be defined, parameterized by {(Rj,kj,ℓj): 1≤j≤α}, as a flexible storage code such that all the symbols of any node can be recovered by reading at most r other nodes, and Rj=kj+⌈kj/r⌉−1. The above Rj matches the minimum distance bound. As a result, this definition of flexible LRC code may imply optimal minimum Hamming distance when all symbols at each node are considered. In the flexible LRC code, first, extra groups are generated in each row. Then, r extra parities are chosen from each extra group and encoded into lower layers. During information reconstruction, extra parities and hence extra groups are recovered from lower layers, leading to a smaller number of required accesses.
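The recovery threshold Rj = kj + ⌈kj/r⌉ − 1 can be computed directly; a small sketch, with values chosen to match the flexible LRC example given later in this description:

```python
import math

def lrc_recovery_threshold(k_j, r):
    """R_j = k_j + ceil(k_j / r) - 1: the number of nodes needed to
    reconstruct when k_j information symbols have locality r."""
    return k_j + math.ceil(k_j / r) - 1

print(lrc_recovery_threshold(6, 2))  # -> 8, matching (R1, k1) = (8, 6)
print(lrc_recovery_threshold(4, 2))  # -> 5, matching (R2, k2) = (5, 4)
```

Note the MDS comparison: an MDS code would need only kj nodes, so the ⌈kj/r⌉ − 1 extra accesses are the price paid for locality r.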
Applying optimal LRC codes by groups to the construction described herein for flexible codes, an (n,k,ℓ,r) flexible LRC code may be parameterized by {(Rj,kj,ℓj): 1≤j≤α}. In this example, n is divisible by r+1 and all kj's are divisible by r. The code is defined over a finite field F of sufficiently large size.
The resulting code turns out to be an (n,kj,ℓj,r) LRC code when ℓj symbols are accessed at each node. That is, for any single node failure, there exists a group of at most r helpers such that the failure can be recovered from them.
According to embodiments, the construction of a flexible LRC code results in a flexible LRC code with locality r and {(Rj,kj,ℓj): 1≤j≤α}.
According to embodiments, encoding can be applied such that in Layer j, an optimal LRC code is applied to each row. The kℓ information symbols are encoded in the ℓ1 rows of Layer 1, and the remaining rows are encoded from extra parities.
Embodiments include choosing the n stored symbols and the kj−kα extra parities in each row. In an optimal LRC code of length n+(kj−kα)(r+1)/r, there are (n+(kj−kα)(r+1)/r)/(r+1) groups. n/(r+1) of the groups, containing n symbols, are selected as the stored symbols, and the n stored symbols in each row form an (n,kj,r), j∈[α], optimal LRC code. In each of the remaining (kj−kα)/r groups, r of the r+1 nodes are selected as extra parities, giving the kj−kα extra parities. Since all the information symbols are encoded in Layer 1, the information symbols can be decoded if enough dimensions to decode Layer 1 are obtained.
Embodiments allow for decoding all information symbols from any Rj=kj+⌈kj/r⌉−1 nodes, where each node accesses its first ℓj symbols.
Starting from Layer j, since each row of it is part of an optimal LRC code, the layer can be decoded from the Rj nodes by a property of the optimal LRC codes.
By way of example, given that 1<j′≤j and that Layers j′, j′+1, . . . , j are decoded, the construction provides that all the extra parities in Layer j′−1 are included as the information symbols in Layers j′, j′+1, . . . , j and are decoded. Also, we know from the encoding part that the extra parities in Layer j′−1 include r of the r+1 symbols in each extra group of the optimal LRC codes. Thus, according to the locality, the remaining symbol in each of these groups in each row can be reconstructed. Therefore, we get (kj′−1−kj)(r+1)/r additional symbols in each row of Layer j′−1 from the extra parities. Together with the Rj nodes we accessed in each row of Layer j′−1, we get Rj′−1 symbols and we are able to decode Layer j′−1.
Locality: Since each row is encoded as an LRC code with locality r, every layer and the entire code also have locality r.
By way of example, the following parameters may be set: (n,k,ℓ,r)=(12, 4, 3, 2), (R1,k1,ℓ1)=(8, 6, 2), (R2,k2,ℓ2)=(5, 4, 3). The code is defined over F=GF(2^4)={0, 1, α, . . . , α^14}, where α is a primitive element of the field. In total there are kℓ=12 information symbols, denoted u1,0, u1,1, . . . , u1,5, u2,0, u2,1, . . . , u2,5. The example is based on the optimal LRC code constructions. The construction is shown below, where each column is a node with 3 symbols:
where every entry in Row m will be constructed as fm(x) for some polynomial fm(·) and some field element x as below, m=1, 2, 3.
According to embodiments, the evaluation points are divided into 4 groups as x∈{A1={1, α^5, α^10}, A2={α, α^6, α^11}, A3={α^2, α^7, α^12}, A4={α^3, α^8, α^13}}. In addition, A5={α^4, α^9, α^14} may be set as the evaluation point group for the extra parities. Defining g(x)=x^3, g(x) is a constant on each group Ai, i∈[5]. Then the first two rows are encoded with
fm(x)=(um,0+um,1g(x)+um,2g^2(x))+x(um,3+um,4g(x)+um,5g^2(x)), m=1, 2.
The last row is encoded with
f3(x)=(f1(α^4)+f1(α^9)g(x))+x(f2(α^4)+f2(α^9)g(x)).
For each group, since g(x) is a constant, fm(x), m∈[3] can be viewed as a polynomial of degree 1 in x. Any single failure can thus be recovered from the other 2 available nodes evaluated at the points in the same group, and the locality r=2 is achieved.
Noticing that f1(x) and f2(x) are polynomials of degree 7, all information symbols can be reconstructed from the first ℓ1=2 rows of any R1=8 available nodes.
Moreover, f3(x) has degree 4. With R2=5 available nodes, we can first decode f3(x), and hence f1(α^4), f1(α^9), f2(α^4), f2(α^9), from Row 3. Then, f1(α^14) and f2(α^14) can be decoded due to the locality r=2. Finally, together with the 5 evaluations of f1(x) and f2(x) obtained in Rows 1 and 2, we are able to decode all information symbols.
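The locality claim in this example can be checked numerically. The sketch below uses an assumed representation of GF(2^4) with primitive polynomial x^4+x+1; the symbol names (u, f1, the group A1) mirror the text, and the information symbols are arbitrary.

```python
# Minimal numerical check of the worked example's locality over GF(2^4),
# represented with primitive polynomial x^4 + x + 1 (an assumed choice).

EXP, LOG = [0] * 30, [0] * 16
v = 1
for i in range(15):
    EXP[i] = v
    LOG[v] = i
    v <<= 1
    if v & 0x10:
        v ^= 0x13            # reduce by x^4 + x + 1
for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def gmul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def gdiv(a, b):
    return 0 if a == 0 else EXP[(LOG[a] - LOG[b]) % 15]

def f1(x, u):
    # f1(x) = (u0 + u1*g + u2*g^2) + x*(u3 + u4*g + u5*g^2), g = x^3
    g = gmul(x, gmul(x, x))
    a = u[0] ^ gmul(u[1], g) ^ gmul(u[2], gmul(g, g))
    b = u[3] ^ gmul(u[4], g) ^ gmul(u[5], gmul(g, g))
    return a ^ gmul(x, b)

u = [3, 7, 1, 9, 12, 5]               # arbitrary information symbols
x0, x1, x2 = EXP[0], EXP[5], EXP[10]  # group A1 = {1, a^5, a^10}

# g(x) = x^3 is constant on A1, so f1 restricted to A1 is affine in x.
# Recover the symbol at x2 from the other two nodes (locality r = 2).
y0, y1 = f1(x0, u), f1(x1, u)
B = gdiv(y0 ^ y1, x0 ^ x1)
A = y0 ^ gmul(B, x0)
assert A ^ gmul(B, x2) == f1(x2, u)
print("single node recovered within its group")
```

The same two-point affine recovery works in each of the five groups, since the cube g(x)=x^3 takes a single value on every group.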
Flexible PMDS Codes
According to embodiments, flexible storage codes may be configured for PMDS codes.
PMDS codes may be used to overcome mixed types of failures in Redundant Arrays of Independent Disks (RAID) systems using Solid-State Drives (SSDs). A code consisting of an ℓ×n array is an (n,k,ℓ,s) PMDS code if it can tolerate n−k node (column) failures and s additional arbitrary symbol failures in the code.
By way of example, let ℓ0=0 and let {(kj,ℓj):1≤j≤a} satisfy the requirements of a flexible code. We define an (n,k,ℓ,s) flexible PMDS code parameterized by {(kj,ℓj):1≤j≤a} such that any row in [ℓj−1+1, ℓj] is an (n,kj) MDS code, and from the first ℓj rows, we can reconstruct the entire information if there are up to n−kj node failures and up to s additional arbitrary symbol failures, 1≤j≤a. As mentioned, for PMDS codes, Rj=kj.
Note that, different from the codes above, the number of information symbols for a flexible PMDS code is at most K=kℓ−s.
According to embodiments, a flexible PMDS code can encode the information using a Gabidulin code into auxiliary symbols, which are allocated to each layer according to kj, j∈[a]. MDS codes with different dimensions are then applied to each row, thus ensuring flexible information reconstruction. Configurations may utilize a general construction of PMDS codes for any k and s using Gabidulin codes. In addition to that construction, embodiments include applying it to flexible PMDS codes.
An (N,K) Gabidulin code over the finite field F=GF(q^L), L≥N, is defined by the linearized polynomial f(x)=Σi=0^{K−1} ui x^{q^i}, where ui∈F, i=0, 1, . . . , K−1, are the information symbols. The N code word symbols are f(a1), f(a2), . . . , f(aN), where the N evaluation points {a1, . . . , aN} are linearly independent over GF(q). From the evaluations at any K points that are linearly independent over GF(q), the information can be recovered.
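The property that makes this work is that f(x)=Σ ui x^{q^i} is a linearized polynomial, i.e., GF(q)-linear in x. The sketch below illustrates this for q=2 over GF(2^8); the field GF(2^8) with modulus 0x11B and the particular symbols are illustrative choices, not taken from the text.

```python
# Hedged sketch of the GF(2)-linearity behind Gabidulin codes: the
# encoding polynomial f(x) = sum_i u_i * x^(2^i) is additive, so
# evaluations at GF(2)-linearly-independent points act like
# independent equations.  GF(2^8) with modulus 0x11B is illustrative.

def gmul(a, b, mod=0x11B, bits=8):
    # carry-less multiplication with polynomial reduction
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & (1 << bits):
            a ^= mod
        b >>= 1
    return r

def gpow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gmul(r, a)
        a = gmul(a, a)
        e >>= 1
    return r

def f(x, u):
    # f(x) = u0*x + u1*x^2 + u2*x^4 + ... (exponents are powers of q = 2)
    acc = 0
    for i, ui in enumerate(u):
        acc ^= gmul(ui, gpow(x, 2 ** i))
    return acc

u = [29, 113, 7]                          # K = 3 information symbols
a, b = 0x53, 0xCA
assert f(a ^ b, u) == f(a, u) ^ f(b, u)   # additivity over GF(2)
print("f is GF(2)-linear on this sample")
```

Because of this linearity, any K evaluations of f at GF(2)-independent points determine the K information symbols, which is exactly the decoding guarantee used below.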
By way of example, the (n,k,ℓ,s) code word may be an ℓ×n matrix over F=GF(q^L), such as
where each column is a node. By setting K=kℓ−s, the symbols Cm,i∈F, m∈[ℓ], i∈[k] are the K+s code word symbols from a (K+s, K) Gabidulin code, and for each row m, m∈[ℓ],
[Cm,k+1, . . . , Cm,n]=[Cm,1, . . . , Cm,k]·GMDS,
where GMDS is the k×(n−k) encoding matrix of an (n,k) systematic MDS code over GF(q) that generates the parity symbols.
Consider the tm accessible symbols in row m, m∈[ℓ]; they are equivalent to evaluations of f(x) at tm evaluation points that are linearly independent over GF(q). Thus, with any n−k node failures and s symbol failures, we have tm≤k and Σm∈[ℓ] tm ≥ kℓ−s = K.
Then, with K evaluations of f(x) at points linearly independent over GF(q), we can decode all information symbols. Next, we show how to construct flexible PMDS codes. Rather than generating extra parities as in the constructions above, the main idea here is to divide the code into multiple layers, where each layer applies the construction with a different dimension.
Embodiments may construct an (n,k,ℓ,s) flexible PMDS code over GF(q^N), parameterized by {(kj,ℓj):1≤j≤a} satisfying the requirements of a flexible code, with an (N,K) Gabidulin code over GF(q^N) and a set of (n,kj) systematic MDS codes over GF(q).
With respect to encoding: Cj,mj,i is the symbol in the mj-th row of Layer j and the i-th node, j∈[a], mj∈[ℓj−ℓj−1], i∈[n]. We first encode the K information symbols using the (N,K) Gabidulin code. Then, we set the first kj code word symbols in each row, Cj,mj,i, j∈[a], mj∈[ℓj−ℓj−1], i∈[kj], to be code word symbols of the (N,K) Gabidulin code. The remaining n−kj code word symbols in each row are
[Cj,mj,kj+1, . . . , Cj,mj,n]=[Cj,mj,1, . . . , Cj,mj,kj]·Gn,kj,
where Gn,kj is the encoding matrix (generating the parity check symbols) of the (n,kj) systematic MDS code over GF(q).
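The per-row MDS component of this construction can be sketched as follows. The sketch uses a Reed-Solomon-style systematic code over the prime field GF(257) as a stand-in MDS code, and omits the Gabidulin outer layer, so the row "data" below stand in for Gabidulin code word symbols; all parameters are illustrative.

```python
# Hedged sketch of the per-row encoding in the flexible PMDS
# construction: each row of Layer j is a systematic (n, k_j) MDS code,
# here realized Reed-Solomon-style over GF(257).
P = 257

def interp_eval(pts, ys, x):
    # Lagrange-interpolate the degree-<len(pts) polynomial, evaluate at x.
    total = 0
    for i, (xi, yi) in enumerate(zip(pts, ys)):
        num = den = 1
        for j, xj in enumerate(pts):
            if j != i:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode_row(data, n):
    # Systematic RS: data at points 0..k-1, parities at points k..n-1.
    k = len(data)
    return data + [interp_eval(list(range(k)), data, x) for x in range(k, n)]

n = 6
layers = {1: [10, 20, 30, 40],   # Layer-1 row, k_1 = 4
          2: [50, 60, 70]}       # Layer-2 row, k_2 = 3
for j, data in layers.items():
    kj = len(data)
    row = encode_row(data, n)
    alive = list(range(n - kj, n))   # any k_j surviving nodes suffice
    rec = [interp_eval(alive, [row[i] for i in alive], x) for x in range(kj)]
    assert rec == data
print("each layer row tolerates n - k_j node erasures")
```

Each row is a polynomial evaluation code of degree below kj, so any kj surviving coordinates determine the row, which is the (n,kj) MDS property the construction relies on.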
With respect to decoding: for n−kJ node failures, we access the first ℓJ rows (the first J layers) of each node. The code structure in each layer is similar to the general PMDS code above: the tmj accessible symbols in Row mj of Layer j, j≤J, are equivalent to evaluations of f(x) at tmj points in GF(q^N) that are linearly independent over GF(q). Thus, with n−kJ node failures and s symbol failures, we have tmj≤kJ≤kj for j∈[J], and the total number of such evaluations is at least K.
Then, the information symbols can be decoded from K linearly independent evaluations of f(x).
Flexible MSR Codes
Regarding flexible MSR codes, the number of parity nodes is denoted by r=n−k1. The repair bandwidth is defined as the amount of transmission required to repair a single node erasure, or failure, from all remaining nodes (called helper nodes), normalized by the size of the node. For an (n,k) MDS code, the repair bandwidth is bounded from below by the minimum storage regenerating (MSR) bound.
An MDS code achieving the MSR bound is called an MSR code. For MSR vector codes, each symbol is a vector; for Reed-Solomon (RS) codes, among the most popular codes in practical systems, each symbol is a scalar.
As discussed herein, a set of MDS codes can recover the information symbols for any pair (kj,ℓj), meaning that, restricted to the first ℓj symbols of each node, the code according to embodiments is an (n,kj,ℓj) MDS code. In addition, we require the optimal repair bandwidth property for flexible MSR codes. A flexible MSR code is defined as a flexible storage code such that Rj=kj and a single node failure is recovered using a repair bandwidth satisfying the MSR bound.
According to embodiments, codes in this section may be similar to constructions according to other embodiments above, with additional restrictions on the parity check matrices and the extra parities. A key point here may be that the extra parities and the information symbols in lower layers are exactly the same, and they also share the same parity check sub-matrix. To repair a failed symbol with the smallest bandwidth, the extra parities are viewed as additional helpers, and the required information can be obtained for free from the repair of the lower layers.
An example illustrating 2 layers is provided first; constructions based on vector and scalar MSR codes, respectively, are then presented.
Construct an (n,k,ℓ)=(4, 2, 3) flexible MSR code parameterized by (k1,ℓ1)=(3, 2) and (k2,ℓ2)=(2, 3). The reconstruction of the entire information and the repair bandwidth are described above. Initially, set F=GF(2^2)={0, 1, β, β^2=1+β}, where β is a primitive element of GF(2^2). The construction is based on the following (4, 2, 2) MSR vector code over F^2 with parity check matrix
where each hij is a 2×2 matrix over F. A code word symbol ci is a vector in F^2, i=1, 2, 3, 4, and the code word [c1T,c2T,c3T,c4T]T∈(F^2)^4 lies in the null space of H. One may check that this is a (4, 2) MDS code: any two code word symbols suffice to reconstruct the entire information. The repair matrix is defined as
When node *∈{1,2,3,4} fails, the node c* can be repaired from the equations S*×H×[c1T,c2T,c3T,c4T]T=0. In particular, helper i, i≠*, transmits
which is 1 symbol in F, achieving an optimal total repair bandwidth of 3 symbols in F.
According to embodiments, a flexible MSR code may be formed in which every entry of the code array is a vector in F^2. The code array is shown below, each column being a node:
The code has 2 layers, where C1,m1,i∈F^2 are in Layer 1 and C2,m2,i are in Layer 2, with m1=1, 2, m2=1, i∈[4]. Each Cj,mj,i is the vector [Cj,mj,i,1, Cj,mj,i,2]T with elements in F. The code contains 48 bits in total, of which 24 are information bits, and each node contains 12 bits. We define the code with the 3 parity check matrices shown below. Let
The code is defined by
H1×[C1,1,1T,C1,1,2T,C1,1,3T,C1,1,4T,C2,1,1T]T=0,
H2×[C1,2,1T,C1,2,2T,C1,2,3T,C1,2,4T,C2,1,2T]T=0,
H3×[C2,1,1T,C2,1,2T,C2,1,3T,C2,1,4T]T=0.
According to an exemplary embodiment, this yields an (n,k,ℓ)=(4, 2, 3) flexible MSR code parameterized by (kj,ℓj)∈{(3,2),(2,3)}. One may check that the code defined by H1 or H2 is a (5, 2) MDS code, and that H3 defines a (4, 2) MDS code. Let the index of a failed node be *∈{1,2,3,4}. For the repair, it is first noted that, for i=1, 2,
According to embodiments, a repair matrix S* as described here can repair a failed node *:
S*×H1×[C1,1,1T,C1,1,2T,C1,1,3T,C1,1,4T,C2,1,1T]T=0,
S*×H2×[C1,2,1T,C1,2,2T,C1,2,3T,C1,2,4T,C2,1,2T]T=0,
S*×H3×[C2,1,1T,C2,1,2T,C2,1,3T,C2,1,4T]T=0.
Each helper i∈[4], i≠*, transmits
where
For any failed node, one symbol is needed from each of the remaining Cj,mj,i, which meets the MSR bound.
In this example, codes in the first layer are not required to be MSR codes, thus allowing a smaller field. However, the rank condition guarantees the optimal repair bandwidth for the entire code. Also, in the general constructions, the codes in Layers 1 to a−1 are not required to be MSR codes.
Embodiments are also directed to construction of flexible MSR codes by applying a construction to the vector MSR code and the RS MSR code.
Flexible MSR Codes with Parity Check Matrices
According to embodiments, codes may be defined by parity check matrices, and certain choices of the parity check matrices yield a flexible MSR code. According to embodiments, a second construction (Construction 2) is provided. The code may be defined over a field F, parameterized by (kj,ℓj), j∈[a], such that kjℓj=kℓ, k1>k2> . . . >ka=k, ℓa=ℓ. The parity check matrix for the mj-th row in Layer j∈[a] is defined as
Hj,mj=[hj,mj,1 . . . hj,mj,n gj,mj,1 . . . gj,mj,kj−ka],
where each hj,mj,i and gj,mj,i is an rL×L matrix with elements in F. The MDS code for the mj-th row in Layer j is defined by
Hj,mj×[Cj,mj,1T, Cj,mj,2T, . . . , Cj,mj,nT, C′j,mj,1T, . . . , C′j,mj,kj−kaT]T=0,
where the Cj,mj,i are the stored code word symbols and the C′j,mj,i are the extra parities. In this construction, when we encode the extra parities into lower layers, we set the code word symbols and the corresponding parity check matrix entries to be exactly the same. Specifically, for Layers j<j′≤a, the following may be set:
gj,x,y=hj′,x′,y′, C′j,x,y=Cj′,x′,y′.
Here, for x∈[ℓj−ℓj−1] and kj′−ka+1≤y≤kj′−1−ka, the entry gj,x,y corresponds to hj′,x′,y′ in Layer j′; and
where “mod” denotes the modulo operation.
As a result, the 2 extra parities in Layer 1 are exactly the same as the first 2 symbols in Layer 2, with C′1,1,1=C2,1,1, g1,1,1=h2,1,1 and C′1,2,1=C2,1,2, g1,2,1=h2,1,2.
The code defined by the second construction is a flexible MSR code when an MDS condition and a rank condition satisfy the requirements for MDS codes, and the repair matrices can be used with every parity check matrix. When the MDS property is satisfied, the second construction is the same as the first construction described above, defining the MDS codes by their parity check matrices. For repair, a repair matrix may be used in each row to repair failed nodes. For example, assuming node *, *∈[n], failed, a repair matrix S* may be used in each row's repair, such that
S*×Hj,mj×[Cj,mj,1T, Cj,mj,2T, . . . , Cj,mj,nT, C′j,mj,1T, . . . , C′j,mj,kj−kaT]T=0.
The information symbols C′j,mj,1, . . . , C′j,mj,kj−ka in the lower layers, which share the same corresponding parity check sub-matrices, can be retrieved from the repair of the lower layers, such that the node can be repaired by n−1 helpers. Only L/r symbols are needed from each helper in order to achieve the optimal repair bandwidth.
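The bandwidth accounting behind this claim is simple arithmetic, sketched below with illustrative parameters: n−1 helpers each sending L/r symbols exactly matches the MSR bound (n−1)L/(n−k) when r=n−k.

```python
# Bandwidth accounting sketch for the repair described above: each of
# the n-1 helpers contributes L/r symbols, matching the MSR bound
# (n-1)L/(n-k) exactly when r = n - k.  Parameters are illustrative.
n, k = 4, 2
r = n - k
L = r ** n                        # sub-packetization of the vector MSR code
per_helper = L // r
total = (n - 1) * per_helper
msr_bound = (n - 1) * L // (n - k)
assert total == msr_bound
print("total repair bandwidth:", total, "symbols")
```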
Assume the field size |F|>rn, and let λi,j∈F, i∈[n], j=0, 1, . . . , r−1 be rn distinct elements. The parity check matrix for the (n,k) MSR code can be represented as
where I is the L×L identity matrix and
is a vector of length L=r^n with all elements 0 except the z-th element, which is equal to 1. The r-ary expansion of z may be written as z=(zn, zn−1, . . . , z1), where 0≤zi≤r−1 is the i-th digit from the right, and
As such, Ai is an L×L diagonal matrix with diagonal elements λi,zi. The repair matrices S*, *∈[n], are also defined as:
S*=Diag(D*, D*, . . . , D*),
with D* an r^{n−1}×r^n matrix specified as follows.
Here, for 0≤x≤r^{n−1}−1 and 0≤y≤r^n−1, the (x,y)-th entry of D* equals 1 if the r-ary expansions of x and y satisfy (xn−1, xn−2, . . . , x1)=(yn, yn−1, . . . , yi+1, yi−1, . . . , y1), and equals 0 otherwise.
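The digit-deletion structure of D* can be sketched directly. In the sketch below, the indexing conventions (least-significant digit first, failed-node digit position i counted from the right) are assumptions chosen to match the description above; the parameters are illustrative.

```python
# Assumed-notation sketch of the block D_* inside the repair matrix S_*:
# entry (x, y) is 1 iff the r-ary digits of x equal the digits of y with
# the failed node's digit (position i from the right) deleted.

def digits(z, r, n):
    return [(z // r ** t) % r for t in range(n)]  # least significant first

def build_D(r, n, i):
    rows, cols = r ** (n - 1), r ** n
    D = [[0] * cols for _ in range(rows)]
    for x in range(rows):
        xd = digits(x, r, n - 1)
        for y in range(cols):
            yd = digits(y, r, n)
            if xd == yd[:i - 1] + yd[i:]:          # delete digit i of y
                D[x][y] = 1
    return D

D = build_D(2, 3, 2)   # r = 2, n = 3, failed node index i = 2
# Each row selects exactly r coordinates and each coordinate is selected
# once, so S_* keeps a 1/r fraction of the parity check equations.
assert all(sum(row) == 2 for row in D)
assert all(sum(D[x][y] for x in range(len(D))) == 1 for y in range(8))
print("D has shape", (len(D), len(D[0])))
```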
Consider an extended field E of F, and denote E*=E\{0}, F*=F\{0}. Then E* can be partitioned into cosets {β1F*, β2F*, . . . }, for some elements β1, β2, . . . in E. The storage nodes (the first n nodes) may be defined as:
where βj,mj is chosen from {β1, β2, . . . }. The additional coefficient may be set as βj,mj. Then, the extra parity entries gj,mj,i can be obtained, and the same Ai may appear in Hj,mj several times, since the extra parity matrices are the same as those of the information symbols in lower layers. We choose the additional coefficients as below. Condition 1: in each Hj,mj, the additional coefficients for the same Ai are distinct. With parity check matrices satisfying Condition 1, Construction 2 is a flexible MSR code.
To calculate the required field size, the disclosure discusses how many additional coefficients are required for flexible MSR codes satisfying Condition 1. In the following, two possible coefficient assignments are proposed; it should be noted that one might find better assignments with smaller field sizes.
The simplest coefficient assignment assigns different additional coefficients to different rows, i.e., βj,mj to Row mj in Layer j for the storage nodes (the first n nodes). By doing so, the matrix βj,mj·Ai, j∈[a], mj∈[ℓj−ℓj−1], i∈[n], will appear at most twice in Construction 2, i.e., in Layer j corresponding to storage Node i, and in Layer j′ corresponding to an extra parity, for some j>j′. Hence, the same Ai will correspond to different additional coefficients in the same row, and Condition 1 is satisfied. In this case, one additional coefficient is needed per row, ℓ in total.
In the second assignment, we assign different additional coefficients in different layers for the storage nodes (the first n nodes), but different rows in the same layer may use the same additional coefficient. For a given row, the storage nodes will not conflict with the extra parities, since the latter correspond to the storage nodes in other layers. Also, the extra parities will not conflict with each other if they correspond to the storage nodes in different layers. Then, we only need to check the extra parities in the same row corresponding to storage nodes in the same layer. For the extra parities/storage nodes gj,x,y=hj′,x′,y′, given j, x, j′, y′, the additional coefficients should be different. In this case, kj′−ka+1≤y≤kj′−1−ka, and there will be at most
that make y′ a constant.
By assigning
number of βs in Layer j′, j′≥2 (in Layer 1, only one β is needed), Condition 1 is satisfied.
The total number of required additional coefficients is
Since (kj−1−kj)ℓj−1=kj(ℓj−ℓj−1), then
In the best case, kj−1−kj≤kj for all j, the number of coefficients is a, and the field size needed satisfies |E|≥a·|F|.
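The counting identity used above can be spot-checked numerically; the (kj,ℓj) pairs below are illustrative, chosen with a constant product of 12.

```python
# Numeric spot-check of the identity above: when k_j * l_j is the same
# for every layer, (k_{j-1} - k_j) * l_{j-1} = k_j * (l_j - l_{j-1}).
pairs = [(12, 1), (6, 2), (4, 3), (3, 4), (2, 6), (1, 12)]
for (k0, l0), (k1, l1) in zip(pairs, pairs[1:]):
    assert k0 * l0 == k1 * l1                 # constant product
    assert (k0 - k1) * l0 == k1 * (l1 - l0)   # the identity
print("identity holds for all consecutive layers")
```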
Comparing these constructions with other flexible MSR constructions, embodiments provide a code such that in each node (n−k)
Embodiments are also directed to flexible RS MSR codes, including construction of Reed-Solomon (RS) MSR codes.
An RS(n,k) code over the finite field F may be defined as
RS(n,k)={(f(a1), f(a2), . . . , f(an)): f∈F[x], deg(f)≤k−1},
where the evaluation points are defined as {a1, a2, . . . , an}⊆F and deg(·) denotes the degree of a polynomial. The encoding polynomial is f(x)=u0+u1x+ . . . +uk−1x^{k−1}, where ui∈F, i=0, 1, . . . , k−1 are the information symbols. Every evaluation symbol f(ai), i∈[n] is called a code word symbol. RS codes are MDS codes; namely, from any k code word symbols, the information can be recovered.
Let B be the base field of F, such that F is a vector space of dimension L over B. For repairing RS codes, any linear repair scheme for a given RS(n,k) over the finite field F=GF(q^L) is equivalent to finding a set of repair polynomials pv(x) such that, for the failed node f(α*), *∈[n],
where rankB({γ1, γ2, . . . , γi}) is defined as the cardinality of a maximum subset of {γ1, γ2, . . . , γi} that is linearly independent over B.
The transmission from helper f(αi) is
tr(pv(αi)f(αi)), v∈[L],
where the trace function tr(x) is a linear function from F to B such that tr(x)∈B for all x∈F. The repair bandwidth for the i-th helper is bi=rankB({pv(αi): v∈[L]}) symbols in B. The flexible RS MSR code construction is similar to Construction 2 based on parity check matrices.
According to embodiments, a third construction (Construction 3) defines a code over F=GF(q^L) with a set of pairs (kj,ℓj), j∈[α] such that kjℓj=kℓ, k1>k2> . . . >kα=k, ℓα=ℓ, r=n−k. In the mj-th row in Layer j, the code word symbols Cj,mj,i, i∈[n] are defined as
and the extra parities C′j,mj,i, i∈[kj−kα] are defined as C′j,mj,i=fj,mj(αj,mj,i+n), where {fj,mj(αj,mj,i): i∈[n+kj−kα]} is an RS code.
The encoding polynomial fj,mj(x) and the evaluation point αj,mj,i may be defined.
In this construction, the extra parities and the corresponding evaluation points are set exactly the same as the information symbols in lower layers, and the extra parities are arranged in the same way as in Construction 2. Specifically, for C′j,x,y in Layer j, x∈[ℓj−ℓj−1], when kj′−kα+1≤y≤kj′−1−kα for j+1≤j′≤α, it is encoded into Layer j′ with αj,x,y+n=αj′,x′,y′ and C′j,x,y=Cj′,x′,y′, for the corresponding (x′,y′). The encoding polynomial fj′,x′(x)∈F[x] in Layer j′ is defined by the kj′ evaluation points and the code word symbols taken from the extra parities.
Latency
In this section, we analyze the latency of obtaining the entire information using codes according to embodiments with a flexible number of nodes.
One of the key properties of the flexible storage codes presented in this disclosure is that, with Rj available nodes, the decoded rows are the first ℓj rows. As a result, the decoder can simply download symbols one by one from each node, and symbols of Layer j can be used for Layers j, j+1, . . . , α. For one pair (Rj,ℓj), let Tj be the random variable associated with the time for the first Rj nodes to transmit their first ℓj symbols; Tj is called the latency for the j-th layer. Instead of predetermining a fixed pair (R,ℓ) for the system, flexible storage codes allow use of all possible pairs (Rj,ℓj), j∈[α]. The decoder downloads symbols from all n nodes, and as soon as it obtains ℓj symbols from Rj nodes, the download is complete. For flexible codes with Layers 1, 2, . . . , α, we use T1,2, . . . ,α=min(Tj, j∈[α]) to represent the latency.
Notice that for a fixed code with the same failure-tolerance level, i.e., R=Rα, ℓ=ℓα, the latency is Tα. Since T1,2, . . . ,α=min(Tj, j∈[α])≤Tα, given the storage size per node ℓ, the number of nodes n, and the recovery threshold R=Rα, the flexible storage code reduces the latency of obtaining the entire information compared to any fixed array code.
Assuming the probability density function (PDF) of Tj is pRj,ℓj(t), the expected delay can be calculated as
If a fixed code is adopted, one can optimize the expected latency and obtain an optimal pair (R*,ℓ*) for a given distribution. However, a flexible storage code still outperforms such an optimal fixed code in latency, due to embodiments described herein and in particular the flexible storage code. Moreover, in practice, the choice of (n,k,R,ℓ) depends on the system size and the desired failure-tolerance level and is not necessarily optimized for latency.
Taking a Hard Disk Drive (HDD) storage system as an example to calculate the latency of flexible storage codes according to embodiments, additional latency can be saved compared to a fixed MDS code. In this part, we compute the overall latency of a flexible code with (R1,ℓ1), (R2,ℓ2), and length n, and compare it with the latency of fixed codes with (n, R1, ℓ1) and (n, R2, ℓ2), respectively.
In the HDD latency model, the overall latency consists of the positioning time and the data transfer time. The positioning time measures the latency to move the hard disk arm to the desired cylinder and rotate the desired sector to under the disk head. As the accessed physical address for each node is arbitrary, we assume the positioning time is a uniformly distributed random variable, denoted by U(0,tpos), where tpos is the maximum latency required to move through the entire disk. The data transfer time is simply a linear function of the data size, and we assume the transfer time for a single symbol in our code is ttrans. Therefore, the overall latency model is x+v·ttrans, where x~U(0,tpos) and v is the number of accessed symbols.
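The latency comparison under this model can be sketched with a short Monte Carlo simulation; the parameters (n, (R1,ℓ1), (R2,ℓ2), tpos, ttrans) below are illustrative, not taken from the text.

```python
# Monte Carlo sketch of the latency comparison under the HDD model
# above: latency = x + v*t_trans with x ~ U(0, t_pos).
import random

random.seed(1)
n, t_pos, t_trans = 12, 10.0, 0.5
R1, l1 = 8, 2
R2, l2 = 5, 3

def trial():
    pos = sorted(random.uniform(0, t_pos) for _ in range(n))
    T1 = pos[R1 - 1] + l1 * t_trans   # R1 nodes send their first l1 symbols
    T2 = pos[R2 - 1] + l2 * t_trans   # R2 nodes send all l2 symbols
    return T1, T2, min(T1, T2)        # the flexible code finishes at the min

N = 20000
sums = [0.0, 0.0, 0.0]
for _ in range(N):
    for idx, t in enumerate(trial()):
        sums[idx] += t
E1, E2, Eflex = (s / N for s in sums)
assert Eflex <= E1 and Eflex <= E2    # flexible is never slower on average
print(f"E[T1]={E1:.2f}  E[T2]={E2:.2f}  E[T_flex]={Eflex:.2f}")
```

Since the flexible code completes whenever either layer completes, its latency is the minimum of the per-layer latencies, so its expectation can be strictly below both fixed-code expectations.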
Consider an (n, R, ℓ) fixed code. When R nodes finish the transmission of ℓ symbols, all the information is obtained. The corresponding positioning latency is the R-th order statistic. For n independent random variables satisfying U(0,tpos), the R-th order statistic of the positioning time, denoted by UR, satisfies a beta distribution:
UR~Beta(R, n+1−R, 0, tpos),
with expectation E[UR]=tpos·R/(n+1).
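This order-statistic expectation is easy to verify empirically; the parameters below are illustrative.

```python
# Empirical check of the expectation above: the R-th smallest of n
# i.i.d. U(0, t_pos) samples has mean t_pos * R / (n + 1).
import random

random.seed(7)
n, R, t_pos = 12, 8, 10.0
N = 100000
acc = 0.0
for _ in range(N):
    acc += sorted(random.uniform(0, t_pos) for _ in range(n))[R - 1]
mean = acc / N
expected = t_pos * R / (n + 1)
assert abs(mean - expected) < 0.1
print(f"empirical mean {mean:.3f} vs analytic {expected:.3f}")
```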
For a random variable Y~Beta(α, β, a, c), the probability density function (pdf) is defined as
f(y)=(y−a)^{α−1}(c−y)^{β−1}/(B(α,β)(c−a)^{α+β−1}), a≤y≤c,
where B(α,β) is the beta function.
The expectation of the overall latency for an (n, R1, ℓ1) fixed code, denoted by T1, is
Similarly, the expected overall latency E(T2) for the fixed (n, R2, ℓ2) code is
Considering the flexible code with 2 layers, the difference of the positioning times UR1 and UR2 is ΔU=UR1−UR2.
The expectation of the overall latency for the flexible code, denoted by T1,2, may be written as
where the last term is the latency saved compared to an (n, R1, ℓ1) fixed code.
The latency of a fixed MDS code is a function of n, R, ℓ, tpos, and ttrans.
According to embodiments, one can optimize the code reconstruction threshold R* based on the other parameters. However, the system parameters might change over time, whereas one "optimal" R* cannot.
In conclusion, embodiments are directed to flexible storage codes and to the construction of such codes under various settings. The analysis shows the benefit of these codes in terms of latency.
While this disclosure has been particularly shown and described with references to exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the claimed embodiments.
This application claims priority to U.S. provisional application No. 63/222,218 titled SYSTEMS AND METHODS FOR DISTRIBUTED STORAGE USING STORAGE CODES WITH FLEXIBLE NUMBER OF NODES filed on Jul. 15, 2021, the content of which is expressly incorporated by reference in its entirety.