This application claims priority to Russian Application Number 2015155753, filed on Dec. 25, 2015, entitled “ERASURE CODING FOR ELASTIC CLOUD STORAGE,” which is incorporated herein by reference in its entirety.
As is known in the art, distributed storage systems (or “clusters”) may provide a wide range of storage services, while achieving high scalability, availability, and serviceability. Some distributed storage systems—including Elastic Cloud Storage (ECS) from EMC Corporation of Hopkinton, Mass.—use erasure coding for data protection.
Existing implementations of erasure coding within distributed storage systems may be inefficient in terms of generated network traffic and elapsed encoding time.
According to one aspect of the disclosure, a method is provided for use with a distributed storage system comprising a plurality of storage nodes each having attached storage devices. The method may include: receiving a request from a client to store data; storing a copy of the data within the storage devices attached to a first storage node; storing a copy of the data within the storage devices attached to a second storage node; returning an acknowledgement to the client; scheduling a first erasure encoding task on the first storage node; scheduling a second erasure encoding task on the second storage node; executing, on the first storage node, the first erasure encoding task to generate a first plurality of coded fragments using the copy of the data stored within attached storage devices; executing, on the second storage node, the second erasure encoding task to generate a second plurality of coded fragments using the copy of the data stored within attached storage devices; and storing the first and second pluralities of coded fragments within storage devices attached to at least two different storage nodes.
In some embodiments, returning an acknowledgement to the client occurs before scheduling the first or second erasure encoding tasks.
In various embodiments, the method further includes dividing the data into a plurality of data fragments and storing the plurality of data fragments within storage devices attached to at least two different storage nodes. The data fragments and coded fragments can be stored in different storage nodes. In certain embodiments, each of the data fragments has the same size.
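Purely for illustration, the following Python sketch shows one way to divide data into k equal-size fragments by zero-padding the tail; the helper name and padding scheme are assumptions of the sketch rather than features of the embodiments.

    # Illustrative sketch: split a byte string into k equal-size data fragments,
    # zero-padding the tail so every fragment has the same length. The padding
    # scheme is an assumption of this sketch, not mandated by the embodiments.
    def split_into_fragments(data: bytes, k: int) -> list:
        size = -(-len(data) // k)                      # ceiling division
        padded = data.ljust(size * k, b"\x00")
        return [padded[i * size:(i + 1) * size] for i in range(k)]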
In some embodiments, the method further comprises: deleting the copy of the data from the storage devices attached to the first storage node and deleting the copy of the data from the storage devices attached to the second storage node.
In certain embodiments, the first and second erasure encoding tasks are executed in parallel. In particular embodiments, scheduling the first erasure encoding task on the first storage node comprises adding the first erasure encoding task to a queue within the first storage node.
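For illustration, a minimal Python sketch of the write path described above follows; the cluster, node, and request interfaces (pick_nodes, store_copy, acknowledge, task_queue) are hypothetical stand-ins for whatever mechanisms a given distributed storage system provides.

    # Minimal sketch of the delayed-coding write path: store two complete
    # copies, acknowledge the client, then enqueue erasure encoding tasks on
    # the nodes that already hold the copies. All interfaces are hypothetical.
    def handle_write(cluster, request):
        data = request.payload
        # Store a complete copy of the data on two different storage nodes.
        first_node, second_node = cluster.pick_nodes(count=2)
        first_node.store_copy(request.data_id, data)
        second_node.store_copy(request.data_id, data)
        # Acknowledge the client before any coding work is performed.
        request.acknowledge()
        # Schedule encoding where local copies already exist, avoiding the
        # network traffic of reading the data back just for encoding.
        first_node.task_queue.put(("encode", request.data_id))
        second_node.task_queue.put(("encode", request.data_id))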
According to another aspect of the disclosure, a distributed storage system includes: a plurality of storage nodes having attached storage devices. A first storage node from the plurality of storage nodes may have attached storage devices and be configured to: receive a request from a client to store data; store a copy of the data within the storage devices attached to a second storage node; store a copy of the data within the storage devices attached to a third storage node; return an acknowledgement to the client; schedule a first erasure encoding task on the second storage node; and schedule a second erasure encoding task on the third storage node. The second storage node may have attached storage devices and be configured to: execute the first erasure encoding task to generate a first plurality of coded fragments using the copy of the data stored within attached storage devices; and store the first plurality of coded fragments within storage devices attached to at least two different storage nodes. The third storage node may have attached storage devices and be configured to: execute the second erasure encoding task to generate a second plurality of coded fragments using the copy of the data stored within attached storage devices; and store the second plurality of coded fragments within storage devices attached to at least two different storage nodes.
In some embodiments, the first storage node is configured to return the acknowledgement to the client before scheduling the first or second erasure encoding tasks.
In various embodiments, the first storage node is configured to divide the data into a plurality of data fragments and store the plurality of data fragments within storage devices attached to at least two different storage nodes. The data fragments and coded fragments may be stored in different storage nodes. In certain embodiments, the data fragments each have the same size.
In some embodiments, the second and third storage nodes are further configured to delete the copy of the data from their attached storage devices. In particular embodiments, the first and second erasure encoding tasks are executed in parallel. In certain embodiments, the first storage node is configured to add the first erasure encoding task to a queue within the second storage node.
In some embodiments, the techniques described herein can eliminate unnecessary network traffic by scheduling and executing erasure coding tasks on storage nodes that have local copies of data. In certain embodiments, erasure coding tasks are executed in parallel on multiple different nodes, thereby reducing the elapsed encoding time. In various embodiments, the techniques can be used with ECS and other distributed systems that use erasure coding.
The concepts, structures, and techniques sought to be protected herein may be more fully understood from the following detailed description of the drawings, in which:
The drawings are not necessarily to scale, or inclusive of all elements of a system, emphasis instead generally being placed upon illustrating the concepts, structures, and techniques sought to be protected herein.
Before describing embodiments of the structures and techniques sought to be protected herein, some terms are explained. As used herein, the phrases “computer,” “computing system,” “computing environment,” “processing platform,” “data memory and storage system,” and “data memory and storage system environment” are intended to be broadly construed so as to encompass, for example, private or public cloud computing or storage systems, or parts thereof, as well as other types of systems comprising distributed virtual infrastructure and those not comprising virtual infrastructure. The terms “application,” “program,” “application program,” and “computer application program” herein refer to any type of software application, including desktop applications, server applications, database applications, and mobile applications.
As used herein, the term “storage device” refers to any non-volatile memory (NVM) device, including hard disk drives (HDDs), flash devices (e.g., NAND flash devices), and next generation NVM devices, any of which can be accessed locally and/or remotely (e.g., via a storage area network (SAN)). The term “storage device” can also refer to a storage array comprising one or more storage devices.
In general operation, clients 102 issue requests to the storage cluster 104 to read and write data. Write requests may include requests to store new data and requests to update previously stored data. Data read and write requests include an ID value to uniquely identify the data within the storage cluster 104. A client request may be received by any available storage node 106. The receiving node 106 may process the request locally and/or may delegate request processing to one or more peer nodes 106. For example, if a client issues a data read request, the receiving node may delegate/proxy the request to a peer node where the data resides. In various embodiments, the cluster 104 uses erasure coding to protect data stored therein, as described below in conjunction with
In various embodiments, the distributed storage system 100 comprises an object storage system, wherein data is read and written in the form of objects, which are uniquely identified by object IDs. In some embodiments, the storage cluster 104 utilizes Elastic Cloud Storage (ECS) from EMC Corporation of Hopkinton, Mass.
In some embodiments, the system 100 employs a flat cluster architecture whereby cluster-level services are distributed evenly among the nodes. To implement cluster-level services using a flat cluster architecture, processing may be coordinated and shared among several nodes using the concept of object ownership. An object stored within the system 100, including system objects and user data, may be owned by a single node 106 at any given time. When a node owns an object, it may be solely responsible for handling updates to the object or for performing other processing associated with the object. Notably, a given node may own an object (e.g., user data) without having a copy of that object's data stored locally (i.e., the object data can be stored on one or more remote nodes).
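By way of illustration only, the following Python sketch shows one possible way to determine the owner node for an object in a flat cluster by hashing the object ID onto the node list, and to route a request accordingly; the hashing scheme and the process interface are assumptions of the sketch, not features required by the embodiments.

    # Illustrative only: a simple hash-based ownership assignment for a flat
    # cluster. The embodiments do not prescribe how ownership is assigned; this
    # scheme and the node interfaces below are assumptions of the sketch.
    import hashlib

    def owner_node(object_id: str, nodes: list):
        digest = hashlib.sha256(object_id.encode("utf-8")).digest()
        return nodes[int.from_bytes(digest[:8], "big") % len(nodes)]

    def handle_request(receiving_node, request, nodes):
        owner = owner_node(request.object_id, nodes)
        if owner is receiving_node:
            return receiving_node.process(request)   # handle locally
        return owner.process(request)                # delegate to the owning peer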
In the example shown, a storage node 106′ includes the following services: an authentication service 108a to authenticate requests from clients 102; storage API services 108b to parse and interpret requests from clients 102; a storage chunk management service 108c to facilitate storage chunk allocation/reclamation for different storage system needs and monitor storage chunk health and usage; a storage server management service 108d to manage available storage device capacity and to track storage device states; and a storage server service 108e to interface with the storage devices 110.
A storage device 110 may comprise one or more physical and/or logical storage devices attached to the storage node 106a. A storage node 106 may utilize VNX, Symmetrix VMAX, and/or Fully Automated Storage Tiering (FAST), which are available from EMC Corporation of Hopkinton, Mass. While vendor-specific terminology may be used to facilitate understanding, it is understood that the concepts, techniques, and structures sought to be protected herein are not limited to use with any specific commercial products.
Referring to
The distribution matrix 204 may be a (k+m)×k matrix comprising a first sub-matrix 204a having k rows and a second sub-matrix (referred to as the “coding matrix”) 204b having m rows. The first sub-matrix 204a may be an identity matrix, as shown. In this form, the distribution matrix 204 can be multiplied by a data column vector 202 to result in a data-and-coding column vector 206 comprising the k data fragments 206a and the m coded fragments 206b.
The coding matrix 204b includes coefficients Xi,j which may be selected using known erasure coding techniques. In some embodiments, the coding coefficients are selected such that the system can tolerate the loss of any m fragments. The coefficients Xi,j may be selected based upon the specific erasure coding algorithm used.
It will be appreciated that the encoding process can be performed as m independent dot products using individual rows from the coding matrix 204b and the data column vector 202. In particular, the ith coded fragment Ci can be calculated as the dot product of the ith row of the coding matrix 204b with the data column vector 202. In some embodiments, the system takes advantage of this fact to perform parallel coding across multiple storage nodes, as described further below in conjunction with
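As a purely illustrative sketch of this property, the following Python computes each coded fragment as a dot product of one coding-matrix row with the data vector; each fragment is modeled as a single integer and ordinary arithmetic is used, whereas a practical implementation applies the same dot products element-wise to fragment bytes over a Galois field (e.g., GF(2^8)).

    # Sketch: coded fragment Ci is the dot product of the i-th row of the
    # coding matrix with the data column vector (D1..Dk). Fragments are modeled
    # as single integers here for brevity.
    def encode(coding_matrix, data_fragments):
        # coding_matrix: m rows of k coefficients X[i][j]
        # data_fragments: the k data fragments D1..Dk
        return [sum(x * d for x, d in zip(row, data_fragments))
                for row in coding_matrix]

    # Because each row is independent, different coded fragments can be
    # computed on different storage nodes in parallel.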
The data fragments D1, D2, . . . , Dk and coded fragments C1, C2, . . . , Cm may be distributed among the cluster storage nodes 106 (
If a data fragment D1, D2, . . . , Dk is lost (e.g., due to a node failure, a storage device failure, or data corruption), the lost fragment may be regenerated using a decoding matrix (not shown), available data fragments from D1, D2, . . . , Dk, and coded fragments C1, C2, . . . , Cm. The decoding matrix can be constructed as an inverse of a modified distribution matrix 204 using known techniques (which may take into account which data fragments were lost). At least k unique available fragments (either data fragments or coded fragments) may be required to decode a lost data fragment.
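The recovery step can likewise be illustrated with a short, hedged sketch: select k surviving rows of the distribution matrix, solve the resulting k-by-k system, and obtain the original data fragments. Floating-point arithmetic via numpy is used only to keep the sketch brief; an actual implementation performs this step over the code's Galois field, and the fragment/row representation shown is an assumption of the sketch.

    # Sketch of decoding: k surviving fragments (data or coded) together with
    # the matching rows of the distribution matrix form a k x k system whose
    # solution is the original data vector D1..Dk.
    import numpy as np

    def decode(distribution_matrix, surviving_fragments):
        # distribution_matrix: (k+m) x k list of rows (identity rows, then coding rows)
        # surviving_fragments: dict {row_index: fragment_value} with at least k entries
        k = len(distribution_matrix[0])
        rows = sorted(surviving_fragments)[:k]        # pick k survivors
        a = np.array([distribution_matrix[i] for i in rows], dtype=float)
        b = np.array([surviving_fragments[i] for i in rows], dtype=float)
        return np.linalg.solve(a, b)                  # recovered D1..Dk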
Referring to
To reduce the amount of time a user/client must wait when storing new data, the system 300 may use a delayed coding technique. As shown by example in
In the example of
After an acknowledgement is sent to the client, the node that owns the data D may schedule an erasure coding task to generate m coded fragments C1, C2, . . . , Cm. In some embodiments, storage nodes maintain a queue of coding tasks and scheduling a task corresponds to adding a task to an appropriate task queue (sometimes referred to as “enqueuing” a task). In certain embodiments, the erasure coding task is scheduled and executed on the owner node itself. However, if the distributed storage system uses a flat cluster architecture, the owner node may not have a local copy of the data. Thus, using this local approach, the owner node might be required to retrieve the data from remote nodes, generating unnecessary network traffic. For example, in
Referring to
In the example of
After the coded fragments are generated, the remote node 314 can store the coded fragments C1, C2, . . . , Cm across multiple different storage nodes according to a desirable data layout. For example, in
Once the data fragments and the coded fragments are safely stored, the complete copies of the data D can be deleted. In the example of
Any suitable technique can be used to schedule coding tasks to multiple different remote nodes. For example, if two nodes have a complete copy of the data D, both of those nodes may be tasked with generating half (i.e., m/2) of the coded fragments.
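As a minimal illustration of dividing the coding work, the following sketch assigns coded-fragment indices round-robin among the nodes that hold a complete copy; the policy shown is an assumption of the sketch, and any other partitioning could be used.

    # Sketch: distribute the m coded-fragment indices round-robin among the
    # nodes that hold a complete copy, so that with two such nodes each one
    # generates m/2 of the coded fragments.
    def assign_coding_work(nodes_with_copy, m):
        work = {node: [] for node in nodes_with_copy}
        for i in range(m):
            work[nodes_with_copy[i % len(nodes_with_copy)]].append(i)
        return work                                   # node -> list of fragment indices

    # Example: with two nodes and m = 6, each node is assigned three of the
    # coded fragments C1..C6.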
In the example of
Referring to
The new data is owned by a storage node, which does not necessarily have a local copy of the data. At block 410, the owner node identifies multiple nodes that include a complete copy of the data and selects one or more of those nodes for erasure coding. At block 412, the owner node schedules remote erasure coding tasks on each of the selected nodes. In some embodiments, the owner node tasks different remote nodes with generating different coded fragments.
At block 414, the erasure encoding tasks are executed locally on each of the selected nodes to generate coded fragments. If multiple nodes are selected, the encoding tasks may be performed in parallel. At block 416, the coded fragments are stored across multiple storage nodes. After the coded fragments are stored, the complete copies of the data can be deleted from the cluster (block 418).
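For completeness, a hedged end-to-end sketch of this flow (blocks 410-418) follows; it reuses the hypothetical helpers from the earlier sketches, and every cluster/node interface shown is an assumption rather than a definitive implementation.

    # End-to-end sketch of remote, parallel erasure coding: the owner finds the
    # nodes holding complete copies, schedules per-node coding tasks, each node
    # encodes from its local copy and distributes its coded fragments, and the
    # complete copies are deleted once all fragments are safely stored.
    def protect(data_id, cluster, m):
        copy_nodes = cluster.nodes_with_complete_copy(data_id)    # block 410
        work = assign_coding_work(copy_nodes, m)                  # split the work
        for node, fragment_indices in work.items():               # block 412
            node.task_queue.put(("encode", data_id, fragment_indices))
        # Blocks 414-416 then run on each selected node, in parallel, roughly:
        #   data = node.read_local(data_id)
        #   fragments = split_into_fragments(data, k)
        #   coded = encode(rows_for(fragment_indices), fragments)
        #   cluster.store_fragments(data_id, fragments, coded)
        cluster.wait_until_fragments_stored(data_id)
        for node in copy_nodes:                                   # block 418
            node.delete_complete_copy(data_id)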
Processing may be implemented in hardware, software, or a combination of the two. In various embodiments, processing is provided by computer programs executing on programmable computers/machines that each includes a processor, a storage medium or other article of manufacture that is readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and one or more output devices. Program code may be applied to data entered using an input device to perform processing and to generate output information.
The system can perform processing, at least in part, via a computer program product, (e.g., in a machine-readable storage device), for execution by, or to control the operation of, data processing apparatus (e.g., a programmable processor, a computer, or multiple computers). Each such program may be implemented in a high level procedural or object-oriented programming language to communicate with a computer system. However, the programs may be implemented in assembly or machine language. The language may be a compiled or an interpreted language and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network. A computer program may be stored on a storage medium or device (e.g., CD-ROM, hard disk, or magnetic diskette) that is readable by a general or special purpose programmable computer for configuring and operating the computer when the storage medium or device is read by the computer. Processing may also be implemented as a machine-readable storage medium, configured with a computer program, where upon execution, instructions in the computer program cause the computer to operate.
Processing may be performed by one or more programmable processors executing one or more computer programs to perform the functions of the system. All or part of the system may be implemented as special purpose logic circuitry (e.g., an FPGA (field programmable gate array) and/or an ASIC (application-specific integrated circuit)).
All references cited herein are hereby incorporated herein by reference in their entirety.
Having described certain embodiments, which serve to illustrate various concepts, structures, and techniques sought to be protected herein, it will be apparent to those of ordinary skill in the art that other embodiments incorporating these concepts, structures, and techniques may be used. Elements of different embodiments described hereinabove may be combined to form other embodiments not specifically set forth above and, further, elements described in the context of a single embodiment may be provided separately or in any suitable sub-combination. Accordingly, it is submitted that the scope of protection sought herein should not be limited to the described embodiments but rather should be limited only by the spirit and scope of the following claims.