The field relates generally to data storage and more particularly to parallel file systems and other types of cluster file systems.
Parallel storage systems are widely used in many computing environments. Parallel storage systems provide high degrees of concurrency in which many distributed processes within a parallel application simultaneously access a shared file namespace. Parallel computing techniques are used in many industries and applications for implementing computationally intensive models or simulations.
In many parallel computing applications, a group of distributed processes must often write data to a shared file. When multiple processes attempt to write data to the shared file concurrently, however, the performance of the parallel storage system is impaired. Serialization is incurred when the parallel file system locks the shared file in order to maintain the consistency of the shared file, and this serialization can cause significant performance degradation because the parallel processes must remain idle while they wait for one another.
Parallel Log Structured File System (PLFS) is a virtual log-structured file system that allows data to be written quickly into parallel file systems. PLFS is particularly useful when multiple applications write concurrently to a shared file in a parallel file system. Generally, PLFS improves write performance in this context by rearranging the IO operations from write operations to a single file into write operations to a set of sub-files. Metadata is created for each sub-file to indicate where the data is stored. The metadata is resolved when the shared file is read. One challenge, however, is that the amount of metadata required to read the data back can be extremely large. Each reading process must read all of the metadata that was created by all of the writing processes. Thus, all of the reading processes are required to redundantly store the same large amount of metadata in a memory cache.
A need therefore exists for improved techniques for storing metadata associated with sub-files from a single shared file in a parallel file system.
Embodiments of the present invention provide improved techniques for storing metadata associated with a plurality of sub-files associated with a single shared file in a parallel file system. In one embodiment, a compute node of a parallel file system is configured to communicate with a plurality of object storage servers and with a plurality of other compute nodes over a network. A plurality of processes executing on the plurality of compute nodes generate a shared file. The compute node implements a Parallel Log Structured File System (PLFS) library to store at least one portion of the shared file and metadata for the at least one portion of the shared file on one or more of the plurality of object storage servers. The compute node is further configured to store the metadata by striping the metadata across a plurality of subdirectories of the shared file.
In one exemplary embodiment, the metadata is striped across the plurality of subdirectories in a round-robin manner. The plurality of subdirectories are stored on one or more of the object storage servers. Write and read processes optionally communicate using a message passing interface. A given write process writes metadata for a given portion of the shared file to an index file in a particular one of the subdirectories corresponding to the given portion.
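By way of a non-limiting illustration, the following C++ sketch shows one way a write process might map a logical offset within the shared file to the corresponding stripe subdirectory in a round-robin manner. The helper name stripe_subdir_for_offset and the parameters container_path, stripe_size and num_stripes are hypothetical and are not part of any particular PLFS implementation:

    #include <cstdint>
    #include <string>

    // Hypothetical helper: map a logical offset within the shared file to the
    // stripe subdirectory that holds the index (metadata) for that region.
    // Assumes a fixed stripe_size (e.g., one gigabyte) and num_stripes
    // subdirectories laid out under the container directory for the shared file.
    std::string stripe_subdir_for_offset(const std::string& container_path,
                                         uint64_t logical_offset,
                                         uint64_t stripe_size,
                                         unsigned num_stripes) {
        // Round-robin placement: consecutive stripe-sized regions of the shared
        // file rotate through the stripe subdirectories.
        const uint64_t stripe = (logical_offset / stripe_size) % num_stripes;
        return container_path + "/stripe" + std::to_string(stripe);
    }

A given write process would then append the metadata for a written region to an index file within the subdirectory returned by such a helper.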
Advantageously, illustrative embodiments of the invention write data from a group of distributed processes to a shared file using a parallel log-structured file system. Metadata processing operations in accordance with aspects of the present invention reduce data processing and transfer bandwidth costs and preserve valuable disk space. These and other features and advantages of the present invention will become more readily apparent from the accompanying drawings and the following detailed description.
Illustrative embodiments of the present invention will be described herein with reference to exemplary parallel file systems and associated clients, servers, storage arrays and other processing devices. It is to be appreciated, however, that the invention is not restricted to use with the particular illustrative parallel file system and device configurations shown. Accordingly, the term “parallel file system” as used herein is intended to be broadly construed, so as to encompass, for example, distributed file systems, cluster file systems, and other types of file systems implemented using one or more clusters of processing devices.
As indicated above, one challenge in a parallel file system, when a plurality of distributed processes write to a shared file, is the amount of metadata that must be stored and processed. Aspects of the present invention recognize that the logging of data in a parallel file system improves data bandwidth but creates excessive metadata. According to one aspect of the present invention, metadata is striped to reduce the metadata lookup time as well as the metadata memory footprint. For example, big data and cloud environments are beginning an inevitable convergence with high performance computing (HPC), since cloud compute nodes are likely to be less powerful than typical HPC compute nodes. In one exemplary embodiment, the sharding of metadata in accordance with the present invention is integrated with flash-based HPC burst buffer nodes positioned on the edge of the cloud to reduce any performance cost associated with the multiple metadata lookups that may become necessary if the striped metadata is cached only for a subset of the stripes.
While the present invention is illustrated in the context of a PLFS file system, the present invention can be employed in any parallel file system that employs extensive data mapping metadata.
One or more of the devices in
The parallel file system 100 may be embodied as a parallel log-structured file system (PLFS). The parallel log structured file system (PLFS) may be based on, for example, John Bent et al., “PLFS: A Checkpoint Filesystem for Parallel Applications,” Int'l Conf. for High Performance Computing, Networking, Storage and Analysis 2009 (SC09) (November 2009), incorporated by reference herein.
Storage arrays utilized in the parallel file system 100 may comprise, for example, storage products such as VNX and Symmetrix VMAX, both commercially available from EMC Corporation of Hopkinton, Mass. A variety of other storage products may be utilized to implement at least a portion of the object storage targets of the parallel file system 100.
The network may comprise, for example, a global computer network such as the Internet, a wide area network (WAN), a local area network (LAN), a satellite network, a telephone or cable network, a cellular network, a wireless network such as WiFi or WiMAX, or various portions or combinations of these and other types of networks. The term “network” as used herein is therefore intended to be broadly construed, so as to encompass a wide variety of different network arrangements, including combinations of multiple networks possibly of different types.
The object storage servers 104 may optionally be arranged into a plurality of storage tiers, in a known manner. As noted above, each of the storage devices 105 may be viewed as being representative of an object storage target of the corresponding one of the object storage servers 104.
Also, although two object storage targets 105 are associated with each object storage server 104 in the
The parallel file system 100 may be implemented, by way of example, in the form of a Lustre file system, although use of Lustre is not a requirement of the present invention. Accordingly, servers 104 need not be configured with Lustre functionality, but may instead represent elements of another type of cluster file system.
In the parallel file system 100 of
In the exemplary embodiment of
As indicated above, PLFS is a virtual log-structured file system that allows data to be written quickly in such parallel file systems 100. PLFS is particularly useful when multiple applications on compute nodes 150 write concurrently to a shared file. One challenge, however, as noted above, is that the amount of metadata required to read data back from PLFS can be extremely large.
When an application on a compute node 150 writes to a shared file, a PLFS library 130 on the compute node 150 translates the write operation into a write to a given sub-file or data portion 110. The PLFS library 130 interacts with the exemplary Lustre file system and applications running on the compute nodes 150.
As shown in
The PLFS library 130 also creates metadata 120-1 through 120-N associated with each corresponding data portion 110-1 through 110-N that must be stored along with the corresponding data portion 110-1 through 110-N. Metadata is created for each data portion (sub-file) 110 to indicate where the data is stored. The metadata 120 comprises, for example, a logical offset, a physical offset, a length, a file (datalog) identifier, as well as timestamps for start and end times. The metadata is resolved when the shared file is read. One challenge, however, is that the amount of metadata required to read the data back can be extremely large. Each reading process must read all of the metadata that was created by all of the writing processes. Thus, the PLFS library 130 on each compute node 150 must keep an image of the entire metadata 120 corresponding to all data portions 110-1 through 110-N of a given shared file. The metadata 120-1 through 120-N is also stored by the OSSs 104 on the OSTs 105.
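For illustration only, the following is a minimal C++ sketch of what one such index entry might look like. The structure and field names follow the metadata items enumerated above (logical offset, physical offset, length, data log identifier, and start and end timestamps) and are assumptions rather than the actual PLFS metadata layout:

    #include <cstdint>

    // Hypothetical index entry recording where one logged write landed.
    // One such entry would be appended to an index log for each write to a
    // data portion (sub-file) 110.
    struct IndexEntry {
        uint64_t logical_offset;   // offset within the shared file
        uint64_t physical_offset;  // offset within the data log (sub-file)
        uint64_t length;           // number of bytes written
        uint32_t datalog_id;       // identifies the data log holding the bytes
        double   begin_timestamp;  // start time of the write
        double   end_timestamp;    // end time of the write
    };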
In addition, if multiple write processes on different compute nodes 150 write overlapping regions 110 in a shared file, then the PLFS metadata 120 contains stale entries that are still unnecessarily obtained when the read index is constructed.
These and other drawbacks of conventional arrangements are addressed by aspects of the present invention by striping the PLFS metadata across a plurality of subdirectories. Aspects of the present invention recognize that one benefit of PLFS is the logging of data for a non-deterministic placement of data, but at the expense of significant logged metadata. Meanwhile, other types of file systems advantageously stripe stored data across storage nodes (typically in a round-robin manner, for a deterministic placement of data) and have significantly less metadata. Thus, aspects of the present invention provide a hybrid solution, whereby data is logged in PLFS and the metadata is striped. Generally, a comparable amount of metadata is required as in the conventional approach, but only the one stripe of metadata corresponding to the desired data needs to be accessed on a read of the corresponding data. In this manner, the PLFS metadata 120 is striped and the necessary portions of metadata are then read, as needed.
As will be described, such arrangements advantageously allow for more efficient storage of metadata for a shared file in a parallel file system without significant changes to object storage servers, or applications running on those devices.
In the conventional arrangement, on a read operation, all of the index logs (indx) must be read to build a global index across the entire file (foo in
With the metadata striped across subdirectories, on a read operation only the index logs for the target stripe are obtained. For example, if the second gigabyte of a file is desired, only the metadata in stripe1 needs to be accessed. The index files in the exemplary directory 300 can be cached and evicted as needed, to reduce the minimum amount of PLFS metadata that must be consulted from that for the entire file to that for only a single stripe.
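The following C++ sketch illustrates this read path under the assumptions of the earlier sketches (a fixed stripe size, round-robin stripe subdirectories, and per-stripe index files); the helper load_stripe_index is a hypothetical stub, and this sketch is not the PLFS read implementation referenced below:

    #include <cstdint>
    #include <string>
    #include <unordered_map>
    #include <vector>

    // Index entry as in the earlier sketch (timestamps omitted for brevity).
    struct IndexEntry {
        uint64_t logical_offset, physical_offset, length;
        uint32_t datalog_id;
    };

    // Stub: a real implementation would parse every index file found in the
    // given stripe subdirectory (e.g., foo/stripe1/) into memory.
    static std::vector<IndexEntry> load_stripe_index(const std::string& /*stripe_dir*/) {
        return {};
    }

    // Per-stripe index cache: only the stripes actually read are ever loaded,
    // and cached stripes can be evicted independently to bound the footprint.
    static std::unordered_map<std::string, std::vector<IndexEntry>> index_cache;

    // Resolve the index needed for a read at logical_offset, loading only the
    // single stripe that covers it; other stripes stay on disk until needed.
    const std::vector<IndexEntry>& index_for_read(const std::string& container,
                                                  uint64_t logical_offset,
                                                  uint64_t stripe_size,
                                                  unsigned num_stripes) {
        const uint64_t stripe = (logical_offset / stripe_size) % num_stripes;
        const std::string dir = container + "/stripe" + std::to_string(stripe);
        auto it = index_cache.find(dir);
        if (it == index_cache.end())
            it = index_cache.emplace(dir, load_stripe_index(dir)).first;
        return it->second;
    }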
The total number of index log files in the exemplary directory 300 is larger than the total number of index log files in the directory 200 of
In this manner, shared writes are decoupled, with a similar data distribution as the conventional solution of
In the exemplary embodiment of
Among other benefits, the use of MPI communications in the manner described above results in only one index file per stripe. In addition, the MPI rank process can optionally buffer the metadata in order to collapse and remove any stale metadata. This also means that the index "log" will actually be a contiguous "flattened" set of index entries, which speeds the ingest. Each MPI rank hosting a stripe will create the stripe subdirectory and can read any existing stripe metadata if this is not a newly created file. Further, the same distribution of stripes to ranks can be done on a read, and each rank can load the index for that stripe and serve lookups from other ranks with a spawned listener thread. Since the index metadata will be distributed across the ranks, the index metadata should never need to be evicted and then re-constructed.
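The following C++ sketch suggests the kind of MPI exchange described above, in which each writing process forwards an index entry to the rank hosting the corresponding stripe so that only one index file per stripe is ultimately written. The message tag, the plain-byte message layout, and the hosting rule (stripe s hosted by rank s mod nranks) are assumptions for illustration and do not reflect an actual implementation:

    #include <mpi.h>
    #include <cstdint>

    // Index entry as in the earlier sketches; kept as plain old data so that it
    // can be shipped as raw bytes without defining an MPI derived datatype.
    struct IndexEntry {
        uint64_t logical_offset, physical_offset, length;
        uint32_t datalog_id;
    };

    static const int INDEX_TAG = 42;  // hypothetical message tag

    // Assumed hosting rule: stripe s is hosted by rank (s mod nranks).
    static int host_rank_for_stripe(uint64_t stripe, int nranks) {
        return static_cast<int>(stripe % static_cast<uint64_t>(nranks));
    }

    // Called by a writing process after logging a data portion: rather than
    // appending to its own per-writer index file, the writer ships the entry to
    // the rank hosting the stripe, which buffers the entries, drops stale ones
    // for overwritten regions, and writes one flattened index file per stripe.
    void forward_index_entry(const IndexEntry& e, uint64_t stripe_size,
                             unsigned num_stripes, MPI_Comm comm) {
        int nranks = 0;
        MPI_Comm_size(comm, &nranks);
        const uint64_t stripe = (e.logical_offset / stripe_size) % num_stripes;
        const int host = host_rank_for_stripe(stripe, nranks);
        MPI_Send(&e, static_cast<int>(sizeof(e)), MPI_BYTE, host, INDEX_TAG, comm);
    }

On the hosting rank, the spawned listener thread mentioned above could receive such messages (for example, with MPI_Recv), buffer and de-duplicate the entries, and serve index lookups from other ranks during reads.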
An existing implementation for a PLFS read operation is discussed, for example, at https://github.com/plfs/plfs-core/blob/2.4/src/LogicalFS/PLFSIndex.cpp, incorporated by reference herein.
It is noted that there need not be separate plfs_write_open and plfs_read_open calls, as discussed herein for illustrative purposes. Among other benefits, aspects of the present invention enable the convergence of big data and HPC by sharding metadata and logging data in large parallel storage systems.
Numerous other arrangements of servers, computers, storage devices or other components are possible. Such components can communicate with other elements over any type of network, such as a wide area network (WAN), a local area network (LAN), a satellite network, a telephone or cable network, or various portions or combinations of these and other types of networks.
It is to be appreciated that the particular operations and associated messaging illustrated in
As indicated previously, components of a compute node 150 having exemplary PLFS software as disclosed herein can be implemented at least in part in the form of one or more software programs stored in memory and executed by a processor of a processing device. A memory having such program code embodied therein is an example of what is more generally referred to herein as a “computer program product.”
The processing device 1202-1 in the processing platform 1200 comprises a processor 1210 coupled to a memory 1212. The processor 1210 may comprise a microprocessor, a microcontroller, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other type of processing circuitry, as well as portions or combinations of such circuitry elements, and the memory 1212, which may be viewed as an example of a “computer program product” having executable computer program code embodied therein, may comprise random access memory (RAM), read-only memory (ROM) or other types of memory, in any combination.
Also included in the processing device 1202-1 is network interface circuitry 1214, which is used to interface the processing device with the network 1204 and other system components, and may comprise conventional transceivers.
The other processing devices 1202 of the processing platform 1200 are assumed to be configured in a manner similar to that shown for processing device 1202-1 in the figure.
Again, the particular processing platform 1200 shown in
It should again be emphasized that the above-described embodiments of the invention are presented for purposes of illustration only. Many variations and other alternative embodiments may be used. For example, the techniques are applicable to a wide variety of other types of devices and systems that can benefit from the shared-file metadata storage techniques disclosed herein. Also, the particular configuration of system and device elements shown in
Other Publications:
Bent et al., "PLFS: A Checkpoint Filesystem for Parallel Applications," SC09, Portland, OR, Nov. 14-20, 2009, pp. 1-12.
Brett et al., "Lustre and PLFS Parallel I/O Performance on a Cray XE6," Cray User Group, Lugano, Switzerland, May 4-8, 2014, pp. 1-33.
Cranor et al., "Structuring PLFS for Extensibility," PDSW13, Denver, CO, Nov. 18, 2013, pp. 20-26.