The present invention relates to data distribution in an analytics cluster. More specifically, the invention relates to directing data from a source analytics cluster to a target analytics cluster sensitive to performance locality.
In an analytics cluster, data is typically stored in a local storage file system. Each node in the analytics cluster has a local storage file system. Data communicated in and out of the cluster flows through one or more head nodes. Details of the architecture of the cluster, including the quantity of servers, network topology, etc., are not visible to an external source. All communications with the cluster are directed through the head node(s), and from the head node(s) through to the supporting compute node(s) of the cluster. Specifically, prior art head nodes process read and write requests so that all of the data for a request passes through the head node. Efficiency of the request is therefore limited by the space and processing capacity of the head node. Accordingly, the head node(s) of the cluster prevent direct read and write transactions on compute nodes from an external source.
This invention comprises a method, system, and article for supporting direct I/O access for read and write transactions with an analytics cluster.
In one aspect, read and write transactions within an analytics cluster are supported. The analytics cluster includes a plurality of regions designated by performance locality, each region having one or more compute nodes. At least one head node supports each region. Data is directed to support communication to one of the plurality of compute nodes in at least one region, and this direction distributes the data across the cluster. The data communication may be in the form of a read transaction or a write transaction. For a write transaction, resource consumption in the head node is minimized. Similarly, for a read transaction, access to an I/O request is directed to a specific head node of a select region. Data is transferred responsive to the data direction. Accordingly, read and write transactions in an analytics cluster are supported through distribution of data.
Other features and advantages of this invention will become apparent from the following detailed description of the presently preferred embodiment of the invention, taken in conjunction with the accompanying drawings.
The drawings referenced herein form a part of the specification. Features shown in the drawings are meant as illustrative of only some embodiments of the invention, and not of all embodiments of the invention unless otherwise explicitly indicated.
It will be readily understood that the components of the present invention, as generally described and illustrated in the Figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the apparatus, system, and method of the present invention, as presented in the Figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention.
Reference throughout this specification to “a select embodiment,” “one embodiment,” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases “a select embodiment,” “in one embodiment,” or “in an embodiment” in various places throughout this specification are not necessarily referring to the same embodiment.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided, such as examples of a profile manager, a cluster manager, a partition manager, a merge manager, an activity manager, an assignment manager, etc., to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
The illustrated embodiments of the invention will be best understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The following description is intended only by way of example, and simply illustrates certain selected embodiments of devices, systems, and processes that are consistent with the invention as claimed herein.
The functional unit(s) described in this specification has been labeled with tools in the form of managers. A manager may be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like. The managers may also be implemented in software for processing by various types of processors. An identified manager of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions which may, for instance, be organized as an object, procedure, function, or other construct. Nevertheless, the executable of an identified manager need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the managers and achieve the stated purpose of the managers.
Indeed, a manager of executable code could be a single instruction, or many instructions, and may even be distributed over several different code segments, among different applications, and across several memory devices. Similarly, operational data may be identified and illustrated herein within the manager, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices, and may exist, at least partially, as electronic signals on a system or network.
A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure comprising a network of interconnected nodes. Referring now to
Computer system/server (112) may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. Computer system/server (112) may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
As shown in
System memory (128) can include computer system readable media in the form of volatile memory, such as random access memory (RAM) (130) and/or cache memory (132). Computer system/server (112) may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system (134) can be provided for reading from and writing to non-removable, non-volatile magnetic media (not shown and typically called a “hard drive”). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to bus (118) by one or more data media interfaces. As will be further depicted and described below, memory (128) may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
Program/utility (140), having a set (at least one) of program modules (142), may be stored in memory (128) by way of example, and not limitation, as well as an operating system, one or more application programs, other program modules, and program data. Each of the operating systems, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment. Program modules (142) generally carry out the functions and/or methodologies of embodiments of the invention as described herein.
Computer system/server (112) may also communicate with one or more external devices (114), such as a keyboard, a pointing device, a display (124), etc.; one or more devices that enable a user to interact with computer system/server (112); and/or any devices (e.g., network card, modem, etc.) that enable computer system/server (112) to communicate with one or more other computing devices. Such communication can occur via Input/Output (I/O) interfaces (122). Still yet, computer system/server (112) can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter (120). As depicted, network adapter (120) communicates with the other components of computer system/server (112) via bus (118). It should be understood that although not shown, other hardware and/or software components could be used in conjunction with computer system/server (112). Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems, etc.
Referring now to
Referring now to
Virtualization layer (362) provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers; virtual storage; virtual networks, including virtual private networks; virtual applications and operating systems; and virtual clients.
In one example, management layer (364) may provide the following functions: resource provisioning, metering and pricing, security, user portal, service level management, and SLA planning and fulfillment. The functions are described below. Resource provisioning provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and pricing provides cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may comprise application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal provides access to the cloud computing environment for consumers and system administrators. Service level management provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment provides pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA.
Workloads layer (366) provides examples of functionality for which the cloud computing environment may be utilized. An example of workloads and functions which may be provided from this layer includes, but is not limited to, organization and management of data objects within the cloud computing environment. In the shared pool of configurable computer resources described herein, hereinafter referred to as a cloud computing environment, files may be shared among users within multiple data centers, also referred to herein as data sites. A series of mechanisms are provided within the shared pool to provide organization and management of data storage. A computer storage system provided within shared pool of resources contains multiple levels known as storage tiers. Each storage tier is arranged within a hierarchy and is assigned a different role within the hierarchy. It should be understood that this hierarchically organized storage system maintains a flexible tier definition, such that tiers can be managed as a singleton on every node or tiers can be managed globally across all or a subset of the nodes in the system.
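The flexible tier definition described above may be illustrated, by way of example only, with the following sketch. The class name, roles, and scope values are hypothetical and not part of any claimed system; the sketch shows only how a tier can be managed either as a singleton on every node or globally across a subset of nodes.

```python
# Illustrative sketch of a flexible storage tier definition: each tier
# carries a role within the hierarchy and a scope, which is either the
# string "singleton" (managed per node) or an explicit set of nodes
# (managed globally across that subset).

class StorageTier:
    def __init__(self, name, role, scope):
        self.name = name
        self.role = role
        self.scope = scope

    def managed_on(self, node):
        """Return True if this tier is managed on the given node."""
        return self.scope == "singleton" or node in self.scope

# A singleton tier exists on every node; a global tier only on its subset.
fast = StorageTier("ssd", role="hot data", scope="singleton")
cold = StorageTier("tape", role="archive", scope={"node1", "node2"})
print(fast.managed_on("node7"), cold.managed_on("node7"))  # True False
```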
An analytics cluster employs compute nodes to support read and write transactions. Within the cluster, the compute nodes may be organized into regions, with each region having a minimum of one compute node. The compute node may be a hardware machine or a virtual machine.
Each head node (512), (522), (532), and (542) is in communication with the compute node(s) in their respective regions. For illustrative purposes, each region is shown with two compute nodes, although in one embodiment each region may be configured with a minimum of one compute node, or a plurality of compute nodes.
As shown, head node (512) is in communication with compute nodes (550) and (552) in region0 (510); head node (522) is in communication with compute nodes (560) and (562) in region1 (520); head node (532) is in communication with compute nodes (570) and (572) in region2 (530); and head node (542) is in communication with compute nodes (580) and (582) in region3 (540). In the multi-region cluster, the multiple head nodes are supported by a head node manager (590) and a direction manager (592). The head node manager (590) determines a list of available head nodes in the cluster to support the request. For each file or directory, the head node manager (590) returns a mapping of the directory to a head node or a mapping of byte ranges and their associated head node. The functionality of the direction manager (592) is an expanded form of the single region direction manager (450), with the direction manager (592) determining a region or a compute node to support the request. The file access client can be executed in one of several different places, including an analytics cluster head node, or a node outside of the analytics cluster. In one embodiment, the data transfer in support of the request may be from one analytics cluster to another analytics cluster, wherein the file access client may be one of the head nodes or a node outside of both clusters. Accordingly, the head node manager (590) functions as a first point of communication for external file access clients that request to read or write data to the cluster, e.g. at least one region within the cluster.
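The two forms of mapping returned by the head node manager may be sketched, by way of example only, as follows. The function names, the toy hash, and the round-robin stripe assignment are illustrative assumptions, not the claimed mechanism; the sketch shows only the shape of the returned layouts: a whole directory maps to one head node, while a large file maps to byte ranges each paired with a head node.

```python
# Illustrative sketch of a head node layout: directory -> head node,
# or a list of (byte range, head node) pairs for a striped file.

def layout_for_directory(directory, head_nodes):
    """Assign a whole directory to one head node (toy hash for determinism)."""
    return {directory: head_nodes[sum(map(ord, directory)) % len(head_nodes)]}

def layout_for_file(size, stripe, head_nodes):
    """Split a file of `size` bytes into `stripe`-sized byte ranges and
    assign each range to a head node round-robin."""
    ranges = []
    offset = 0
    i = 0
    while offset < size:
        end = min(offset + stripe, size)
        ranges.append(((offset, end), head_nodes[i % len(head_nodes)]))
        offset = end
        i += 1
    return ranges

heads = ["head0", "head1"]
print(layout_for_directory("logs", heads))
print(layout_for_file(5, 2, heads))
```

In this sketch, byte ranges of a single file are spread across multiple head nodes, which is what later permits the parallel, multi-head-node transfers discussed below.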
Congestion within a head node of an analytics cluster is reduced by decreasing the workload of the head node.
As shown, initially, a data access request for a dataset is received (602) by a head node manager in an analytics cluster. The head node manager returns the head node layout back to the requesting client (604). The head node layout is a set of head nodes the requesting client will use for its read request. The requesting client then issues a read request to the head node as determined by the head node layout (606). The head node communicates with the direction manager to determine which sub-region or compute node(s) should be employed to support the read request (608). It is then determined if the direction manager has chosen a sub-region or one or more compute nodes to support the request (610). If it is determined that the direction manager has selected a sub-region, the request is forwarded to the head node manager for the sub-region (612) followed by a return to step (604). However, if at step (610) it is determined that the direction manager has selected one or more compute nodes, the request is forwarded to the selected compute node(s) (614). Data is transferred from the designated compute node(s) directly to the requesting client (616), without passing back through the head node. In one embodiment, the data transfer accounts for one or more semi-autonomous storage regions in communication with the head node, and delegates direction to a selected storage region. Accordingly, the read request is supported by a direct communication between the requester and the final destination compute node(s) within the cluster.
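The direction loop of the read flow above may be sketched, by way of illustration only, as follows. The dictionary structure and the decision keys are hypothetical names; the sketch shows only the control flow: at each level the direction manager either delegates to a sub-region (restarting the process there) or names the compute node(s) that will serve the data directly.

```python
# Minimal sketch of the read-direction loop: consult the direction
# manager; if it names a sub-region, descend and repeat; if it names
# compute nodes, those nodes serve the client directly.

def direct_read(cluster, path):
    """Walk sub-regions until a direction manager selects compute nodes."""
    region = cluster
    while True:
        decision = region["direction"](path)          # direction manager
        if decision["kind"] == "compute":
            return decision["nodes"]                  # direct-transfer targets
        region = region["sub_regions"][decision["region"]]  # delegate down

# Two-level example: the top-level manager delegates to sub-region "r1",
# whose own manager selects compute node "cn3".
inner = {"direction": lambda p: {"kind": "compute", "nodes": ["cn3"]},
         "sub_regions": {}}
top = {"direction": lambda p: {"kind": "region", "region": "r1"},
       "sub_regions": {"r1": inner}}
print(direct_read(top, "logs/a"))  # ['cn3']
```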
As shown in
As described above, the cluster may be segregated into regions, with each region having at least one compute node. The regions may be organized based on various characteristics, including a hierarchical organization, administrative domain, workload characteristic, or physical characteristic of the selected node. In one embodiment, the nodes are separated into regions based on performance locality. Regardless of the structure, the head node manager and the direction manager function to ascertain the region(s) and compute node(s) to support the request. Accordingly, the compute nodes may be organized on a multi-dimensional basis, with the organization enabling efficient communication of data between the compute node(s) and the requesting client.
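One way the performance-locality separation described above might be realized is sketched below, by way of example only. Using measured round-trip latency as the locality metric is an illustrative assumption; the sketch shows only that nodes falling within the same latency band are grouped into one region.

```python
# Illustrative sketch: group compute nodes into regions by performance
# locality, approximated here by round-trip latency bands.

def group_by_locality(latencies_ms, band=1.0):
    """Group a node->latency mapping into regions of `band` milliseconds."""
    regions = {}
    for node, lat in latencies_ms.items():
        key = int(lat // band)              # latency band acts as region id
        regions.setdefault(key, []).append(node)
    return {k: sorted(v) for k, v in regions.items()}

nodes = {"cn0": 0.2, "cn1": 0.4, "cn2": 1.3, "cn3": 1.8}
print(group_by_locality(nodes))  # {0: ['cn0', 'cn1'], 1: ['cn2', 'cn3']}
```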
The analytics cluster supports read requests, as demonstrated in
If it is determined that the direction manager has selected a sub-region, the request is forwarded to the head node manager for the sub-region (712) followed by a return to step (704). However, if at step (710) it is determined that the direction manager has selected one or more compute nodes, the request is directly forwarded from the requesting client to the designated compute node(s) (714). The forwarding of the write request is directed through any head nodes of the subject sub-regions without buffering data in the head node(s). In one embodiment, the head node functions as a proxy. Accordingly, compute node(s) to support the write request are located, and the write request is a direct transfer of data from the client to the designated compute node(s) absent any buffering in the head node(s).
If at step (708) it is determined that the cluster includes multiple sub-regions of compute nodes, the direction manager ascertains the sub-region to support the write request. As articulated above, the sub-regions within the cluster may be organized based on various characteristics, including a hierarchical organization, workload characteristic, physical, or runtime characteristic of a selected compute node, and the nodes may be separated into sub-regions based on performance locality. The selection of specific compute nodes may be based on workload characteristics and/or physical attributes of the cluster. Accordingly, in a multiple sub-region cluster, if the direction manager determines the appropriate location for the write request is a sub-region, then the head node contacts the sub-region's head node manager to determine the head nodes through which to forward the write request. At this point the process repeats from the beginning. Eventually a direction manager in one of the regions determines the compute node(s) to support the write request, and the write request is forwarded directly to the designated compute node(s) absent any buffering in the head node(s). The head node(s) store information received from the direction manager until it is invalidated. Therefore, when a head node receives any further write requests covered by this information, no further communication with the direction manager is needed; the head node simply forwards the write request to the correct compute node or sub-region.
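The caching behavior described above, where a head node retains direction-manager information until it is invalidated, may be sketched as follows. The class and its interfaces are hypothetical, by way of example only; the sketch shows only that a direction-manager consultation happens once per path, and that subsequent writes are forwarded from the cached entry until invalidation.

```python
# Illustrative sketch: a head node caches direction-manager answers so
# that repeat write requests for the same path skip the direction
# manager until the cached entry is invalidated.

class HeadNodeCache:
    def __init__(self, direction_manager):
        self._direct = direction_manager   # callable: path -> compute node
        self._cache = {}
        self.lookups = 0                   # direction-manager consultations

    def forward_write(self, path):
        if path not in self._cache:        # consult only on a cache miss
            self.lookups += 1
            self._cache[path] = self._direct(path)
        return self._cache[path]           # forward straight to this node

    def invalidate(self, path):
        self._cache.pop(path, None)        # e.g. layout changed

head = HeadNodeCache(lambda p: "cn%d" % (len(p) % 4))
head.forward_write("data/x")
head.forward_write("data/x")               # second write served from cache
print(head.lookups)  # 1
```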
As shown in
As demonstrated, direction of read and write requests reduces resource consumption on the head node(s). Requests are directed to the compute node(s), or routed to the compute node(s). As shown, within the analytics cluster a hierarchical network topology may exist. Regardless of the position of the designated compute node(s) within the hierarchy, data packets are forwarded through nodes as necessary. With respect to the hierarchical organization of the region(s) and/or compute node(s), the head node for each region understands the topology (via the redirection manager(s)) of the compute nodes within each region. The head node(s) account for network topology to support read and write requests. The cluster may contain semi-autonomous storage regions, with each region making decisions on how to lay out data across the member compute nodes. However, regardless of the cluster architecture, the application layer as shown herein avoids inefficient protocol translation on the head nodes, and supports network efficiency.
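Topology-aware forwarding in such a hierarchy may be sketched, by way of illustration only, as follows. The parent-map representation and node names are hypothetical; the sketch shows only that a packet travels from its source up to the lowest common ancestor and down to the designated compute node, visiting only the nodes on that path.

```python
# Illustrative sketch: route a packet through a hierarchical (tree)
# cluster topology, expressed as a child -> parent map whose root
# ("head") does not appear as a key.

def route(tree, src, dst):
    """Return the forwarding path from src to dst in a parent-map tree."""
    def to_root(n):
        path = [n]
        while n in tree:
            n = tree[n]
            path.append(n)
        return path
    up, down = to_root(src), to_root(dst)
    common = next(n for n in up if n in down)   # lowest common ancestor
    return up[:up.index(common) + 1] + list(reversed(down[:down.index(common)]))

# head node -> rack switches -> compute nodes
parents = {"sw1": "head", "sw2": "head", "cn1": "sw1", "cn2": "sw2"}
print(route(parents, "head", "cn2"))  # ['head', 'sw2', 'cn2']
```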
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowcharts or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated. Accordingly, the enhanced cloud computing model supports flexibility with respect to transaction processing, including, but not limited to, optimizing the storage system and processing transactions responsive to the optimized storage system.
It will be appreciated that, although specific embodiments of the invention have been described herein for purposes of illustration, various modifications may be made without departing from the spirit and scope of the invention. In one embodiment, read requests are gathered together in a buffer, and a response is sent out to the client only once the read request is satisfied. The buffer supports a direct transfer of data between a requesting node and back end storage. The direct transfer is a series of steps to support the request without buffering data. In one embodiment, the head node layout is stored directly on a particular head node, thereby mitigating the need for a head node manager. Similarly, in one embodiment, the data transfer is a parallel data transfer with the head node manager for a region returning a layout which includes multiple head nodes to support the request. Use of file access protocols may be employed to read and write different byte ranges of a file from and to different head and compute nodes. Accordingly, the scope of protection of this invention is limited only by the following claims and their equivalents.