Data analytics platform over parallel databases and distributed file systems

Information

  • Patent Grant
  • 10838960
  • Patent Number
    10,838,960
  • Date Filed
    Wednesday, November 22, 2017
  • Date Issued
    Tuesday, November 17, 2020
  • Field of Search
    • CPC
    • G06F17/30445
    • G06F17/30067
    • G06F17/3007
    • G06F17/30106
    • G06F17/30194
    • G06F17/30224
    • G06F17/30283
    • G06F17/30433
    • G06F17/30442
    • G06F17/30463
    • G06F17/30477
    • G06F17/30545
    • G06F17/30997
    • G06F17/30023
    • G06F17/30073
    • G06F17/30306
    • G06F17/30424
    • G06F16/24542
    • G06F16/2471
    • G06F16/27
    • G06F16/907
    • G06F16/182
    • G06F16/148
    • G06F16/2455
    • G06F16/24524
    • G06F16/1858
    • G06F16/2453
    • G06F16/24532
    • G06F16/10
    • G06F16/11
    • G06F16/217
    • G06F16/245
    • G06F16/113
    • H04L65/60
    • H04L67/1097
  • International Classifications
    • G06F16/24
    • G06F16/2453
    • G06F16/10
    • G06F16/11
    • G06F16/27
    • G06F16/14
    • G06F16/182
    • G06F16/907
    • G06F16/18
    • G06F16/2455
    • G06F16/2458
    • G06F16/2452
    • H04L29/08
    • H04L29/06
    • G06F16/43
    • G06F16/21
    • G06F16/245
  • Disclaimer
    This patent is subject to a terminal disclaimer.
Abstract
Performing data analytics processing in the context of a large scale distributed system that includes a massively parallel processing (MPP) database and a distributed storage layer is disclosed. In various embodiments, a data analytics request is received. A plan is created to generate a response to the request. A corresponding portion of the plan is assigned to each of a plurality of distributed processing segments, including by invoking as indicated in the assignment one or more data analytical functions embedded in the processing segment.
Description
BACKGROUND OF THE INVENTION

Distributed storage systems enable databases, files, and other objects to be stored in a manner that distributes data across large clusters of commodity hardware. For example, Hadoop® is an open-source software framework to distribute data and associated computing (e.g., execution of application tasks) across large clusters of commodity hardware.


EMC Greenplum® provides a massively parallel processing (MPP) architecture for data storage and analysis. Typically, data is stored in segment servers, each of which stores and manages a portion of the overall data set. Advanced MPP database systems such as EMC Greenplum® provide the ability to perform data analytics processing on huge data sets, including by enabling users to use familiar and/or industry standard languages and protocols, such as SQL, to specify data analytics and/or other processing to be performed. Examples of data analytics processing include, without limitation, Logistic Regression, Multinomial Logistic Regression, K-means clustering, Association Rules based market basket analysis, Latent Dirichlet based topic modeling, etc.


While distributed storage systems, such as Hadoop®, provide the ability to reliably store huge amounts of data on commodity hardware, such systems have not to date been optimized to support data mining and analytics processing with respect to the data stored in them.





BRIEF DESCRIPTION OF THE DRAWINGS

Various embodiments of the invention are disclosed in the following detailed description and the accompanying drawings.



FIG. 1 is a block diagram illustrating an embodiment of a large scale distributed system.



FIG. 2 is a block diagram illustrating an embodiment of a data analytics architecture of a large scale distributed system.



FIG. 3 is a flow chart illustrating an embodiment of a database query processing process.



FIG. 4 is a block diagram illustrating an embodiment of a segment server.



FIG. 5 is a flow chart illustrating an embodiment of a process to perform data analytics processing.





DETAILED DESCRIPTION

The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task. As used herein, the term ‘processor’ refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions.


A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured.


Providing advanced data analytics capabilities in the context of a large distributed data storage system is disclosed. In various embodiments, a massively parallel processing (MPP) database system is adapted to manage and provide data analytics with respect to data stored in a large distributed storage layer, e.g., an implementation of the Hadoop® distributed storage framework. Examples of data analytics processing include, without limitation, Logistic Regression, Multinomial Logistic Regression, K-means clustering, Association Rules based market basket analysis, Latent Dirichlet based topic modeling, etc. In some embodiments, advanced data analytics functions, such as statistical and other analytics functions, are embedded in each of a plurality of segment servers comprising the MPP database portion of the system. In some embodiments, to perform a data analytics task, such as computing statistics, performing an optimization, etc., a master node selects a subset of segments to perform associated processing, and sends to each segment an indication of the data analytics processing to be performed by that segment, including for example an identification of the embedded data analytics function(s) to be used, and associated metadata required to locate and/or access the subset of data on which that segment is to perform the indicated processing.



FIG. 1 is a block diagram illustrating an embodiment of a large scale distributed system. In the example shown, the large scale distributed system includes a large cluster of commodity servers. The master hosts include a primary master 102 and a standby master 104. The primary master 102 is responsible for accepting queries; planning queries, e.g., based at least in part on system metadata 106, which in various embodiments includes information indicating where data is stored within the system; dispatching queries to segments for execution; and collecting the results from segments. The standby master 104 is a warm backup of the primary master 102. The network interconnect 108 is used to communicate tuples between execution processes. The compute unit of the database engine is called a “segment”. Each of a large number of segment hosts, represented in FIG. 1 by hosts 110, 112, and 114, can have multiple segments. The segments on segment hosts 110, 112, 114, for example, are configured to execute tasks assigned by the primary master 102, such as to perform assigned portions of a query plan with respect to data stored in distributed storage layer 116, e.g., a Hadoop® or other storage layer.


When the master node 102 accepts a query, the query is parsed and planned according to the statistics of the tables referenced in the query, e.g., based on metadata 106. The planning phase produces a query plan, which is sliced into many slices. In the query execution phase, for each slice a group of segments, typically comprising a subset of the segments hosted on segment hosts 1 through s, is selected to execute the slice. In various embodiments, the size of the group may be determined dynamically using knowledge of the data distribution and available resources, e.g., workload on respective segments, etc.
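The gang-selection step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the scoring heuristic (prefer segments that hold more of the slice's data blocks, break ties by lower workload) and all names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    host: str
    workload: float                                 # current load estimate
    local_blocks: set = field(default_factory=set)  # data blocks stored nearby

def select_gang(segments, slice_blocks, gang_size):
    """Pick a gang for one plan slice: prefer segments holding more of the
    slice's data blocks (locality), break ties by lower current workload."""
    def score(seg):
        locality = len(seg.local_blocks & slice_blocks)
        return (-locality, seg.workload)
    return sorted(segments, key=score)[:gang_size]

segments = [
    Segment("host1", 0.2, {"b1", "b2"}),
    Segment("host2", 0.9, {"b1"}),
    Segment("host3", 0.1, set()),
]
gang = select_gang(segments, {"b1", "b2"}, gang_size=2)  # host1 ranks first: best locality
```

Because gang size here follows from the data distribution and load rather than a fixed cluster-wide constant, the group size is dynamic per slice, as the paragraph above notes.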


In various embodiments, a data analytics job or other query may be expressed in whole or in part using SQL and/or any other specified language or syntax. A master node, such as primary master 102, parses the SQL or other input and invokes scripts or other code available on the master to perform top-level processing for the requested analytics. In various embodiments, a query plan generated by the master 102, for example, may identify for each of a plurality of segments a corresponding portion of the global data set to be processed by that segment. Metadata identifying the location of the data to be processed by a particular segment, e.g., within distributed storage layer 116, is sent to the segment by the master 102. In various embodiments, the distributed storage layer 116 comprises data stored in an instance of the Hadoop Distributed File System (HDFS) and the metadata indicates a location within the HDFS of data to be processed by that segment. The master 102 in addition will indicate to the segment the specific processing to be performed. In various embodiments, the indication from the master may indicate, directly or indirectly, one or more analytics functions embedded at each segment which is/are to be used by the segment to perform the required processing.
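A minimal sketch of the shape such an assignment might take, per the description above. Every field name, the function name, and the HDFS path are illustrative assumptions, not taken from the patent:

```python
# Hypothetical master-to-segment assignment: the segment's share of the plan,
# the embedded function(s) to invoke, and metadata locating its input in HDFS.
assignment = {
    "slice_id": 3,
    "plan_slice": "SCAN -> PARTIAL_AGGREGATE",   # this segment's share of the plan
    "analytics_functions": ["kmeans_step"],      # embedded function(s) to invoke
    "input": {                                   # metadata locating the data
        "scheme": "hdfs",
        "path": "/warehouse/events/part-00042",  # illustrative HDFS path
        "offset": 0,
        "length": 64 * 1024 * 1024,              # e.g., one 64 MB HDFS block
    },
}
```

The key point is that location metadata travels with the assignment, because, unlike a classic MPP segment, the segment does not itself own and manage the data it is asked to process.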



FIG. 2 is a block diagram illustrating an embodiment of a data analytics architecture of a large scale distributed system. In various embodiments, the data analytics architecture 200 of FIG. 2 is implemented in a large scale distributed system, such as the large scale distributed system of FIG. 1. In the example shown, the data analytics architecture 200 includes a user interface 202 that enables data analytics requests to be expressed using SQL, e.g., as indicated by a specification. Various driver functions 204, e.g., Python or other scripts with templated SQL in this example, may be invoked to perform, for example, the outer loops of iterative algorithms, optimizer invocations, etc. A high level abstraction layer 206, in this example also comprising Python scripts, provides functionality such as an iteration controller, convex optimizers, etc. The upper layers 202, 204, and 206 interact with RDBMS built-in functions 208 and/or with inner loops 210 and/or low-level abstraction layer 212, comprising compiled C++ in this example, to perform the lower level tasks required to carry out a task received via user interface 202. Data is accessed to perform analytics computations and/or other processing by interacting with an underlying RDBMS query processing layer 214. In various embodiments, one or more of the components shown in FIG. 2 may be implemented across nodes comprising the system, such as across the segments or other processing units comprising the MPP database portion of a large scale distributed system such as the one shown in FIG. 1. In various embodiments, core data analytics processing is performed at least in part using functions embedded in each of the segments (or other processing units) included in the system. In some embodiments, the functions comprise a “shared object” or library of functions comprising compiled C++ or other compiled code, such as Java or Fortran.
As a portion of a broader task is assigned to a segment, the segment uses the embedded function(s) implicated by the assignment to perform at least part of the data analytics and/or other processing that has been assigned to the segment.
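The driver-function pattern described above, a Python outer loop issuing templated SQL while the data-parallel inner step runs on the segments via embedded functions, can be sketched as follows. `run_sql`, the SQL template, and the function name `kmeans_assign_and_update` are hypothetical stand-ins:

```python
# Templated SQL for one inner iteration; the database fans this out to the
# segments, whose embedded functions do the per-tuple work.
KMEANS_STEP = """
    SELECT kmeans_assign_and_update({k}, '{centroids_table}')
    FROM {points_table}
"""

def kmeans_driver(run_sql, points_table, centroids_table, k, max_iters=10):
    """Outer loop of an iterative algorithm, run at the top of the stack.
    `run_sql` is assumed to return a truthy value while centroids still move."""
    for i in range(max_iters):
        moved = run_sql(KMEANS_STEP.format(
            k=k, centroids_table=centroids_table, points_table=points_table))
        if not moved:          # converged: no centroid moved this iteration
            break
    return i + 1               # number of iterations executed
```

This split mirrors layers 204/206 (iteration control in scripts) versus layers 210/212 (compiled inner loops inside the database).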



FIG. 3 is a flow chart illustrating an embodiment of a database query processing process. In some embodiments, a master node, such as primary master 102 of FIG. 1, implements the process of FIG. 3. In the example shown, a query is received (302). Examples of a query include, without limitation, an advanced data analytics request expressed in whole or in part as a set of SQL statements. A query plan is generated (304). The plan is divided into a plurality of slices, and for each slice a corresponding set of segments (“gang”) is identified to participate in execution of that slice of the query plan (306). For each slice of the query plan, the segments selected to perform processing required by that slice are sent a communication that includes both the applicable portion of the plan to be performed by that segment and metadata that may be required by a receiving segment to perform tasks assigned to that segment (308). In some embodiments, the metadata included in the query plan slice and/or other communication sent to the respective segments selected to participate in execution of that slice of the plan includes metadata from a central metadata store, e.g., metadata 106 of FIG. 1, and includes information indicating to the segment the location of data with respect to which that segment is to perform query plan slice related processing. In past approaches, typically a segment would store and manage a corresponding portion of the overall data, and sending metadata to perform query plan related tasks would not typically have been necessary. In some embodiments, metadata and/or other data included in assignments sent to selected segments may indicate data analytics processing to be performed, in whole or in part, by the segment using one or more data analytics functions that have been embedded in each of the segments in the distributed system. 
Query results are received from the respective segments to which query tasks were dispatched, and processed to generate, e.g., at the master node, a master or overall response to the query (310).
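The result-collection step (310) amounts to merging per-segment partial results into one master response. A minimal sketch, assuming the partials are partial counts (the actual merge depends on the query):

```python
def merge_partials(partials):
    """Combine per-segment partial counts into one overall result."""
    total = {}
    for part in partials:
        for key, count in part.items():
            total[key] = total.get(key, 0) + count
    return total

partials = [{"a": 2, "b": 1}, {"a": 1, "c": 4}]  # e.g., partial group counts
overall = merge_partials(partials)
```

For aggregates, this master-side merge corresponds to combining the outputs of the segments' step functions before a final function is applied.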



FIG. 4 is a block diagram illustrating an embodiment of a segment server. In various embodiments, one or more segment servers such as segment server 402 may be deployed in each of a plurality of segment hosts, such as segment hosts 110, 112, and 114 of FIG. 1. In the example shown, the segment server 402 includes a communication interface 404 configured to receive, e.g., via a network interconnect such as interconnect 108 of FIG. 1, a network communication comprising an assignment sent by a master node such as primary master 102 of FIG. 1. A query executor 406 performs processing required to complete tasks assigned by the master node, using in this example a storage layer interface 408 to access data stored in a distributed storage layer, such as distributed storage layer 116 of FIG. 1. One or more data analytics functions included in a shared data analytics library 410 embedded in each segment server in the distributed system may be called to perform data analytics processing, as required to perform the assigned task. Examples of functions that may be embedded in segment servers in various embodiments include, without limitation: User-Defined Functions (e.g., a UDF which randomly initializes an array with values in a specified range, a UDF which transposes a matrix, a UDF which un-nests a 2-dimensional array into a set of 1-dimensional arrays, etc.), step functions, and final functions of various User-Defined Aggregators.
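The three example UDFs named above can be sketched as follows. The patent describes these as compiled code in a shared library; Python is used here purely for illustration, and the function names are invented:

```python
import random

def udf_random_init(n, lo, hi, seed=None):
    """Randomly initialize an array of n values in the range [lo, hi]."""
    rng = random.Random(seed)
    return [rng.uniform(lo, hi) for _ in range(n)]

def udf_transpose(matrix):
    """Transpose a matrix held as a list of rows."""
    return [list(col) for col in zip(*matrix)]

def udf_unnest(matrix):
    """Un-nest a 2-dimensional array into a set of 1-dimensional arrays."""
    return [list(row) for row in matrix]
```

In an aggregator setting, functions like these would serve as building blocks: a step function updates per-segment state tuple by tuple, and a final function produces the aggregate from the merged state.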



FIG. 5 is a flow chart illustrating an embodiment of a process to perform data analytics processing. In various embodiments, the process of FIG. 5 is performed by a segment server in response to receiving an assignment, e.g., from a master node, to perform an assigned part of a data analytics query plan. In the example shown, an assigned task is received (502). Metadata embedded in the assigned task is used to access data as needed to perform the assigned task(s) (504). Data analytics functions embedded at the segment server or other processing unit are invoked as needed to perform the assigned task (506). Examples of functions that may be embedded in segment servers in various embodiments include, without limitation: a function that performs Gibbs sampling for Latent Dirichlet Allocation inference, and a function that generates association rules. Once processing has been completed, a result is returned, for example to the master node from which the assignment was received (508).
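The segment-side flow of FIG. 5 can be sketched as a short handler. `read_input` and the task/function names are hypothetical; the point is the shape of steps 502 through 508:

```python
def handle_assignment(task, read_input, embedded_functions):
    """Segment-side handling of one assigned task (FIG. 5, 502-508)."""
    data = read_input(task["input"])           # 504: locate data via the metadata
    result = data
    for name in task["analytics_functions"]:   # 506: invoke embedded functions
        result = embedded_functions[name](result)
    return result                              # 508: result goes back to the master
```

A usage sketch: with `embedded_functions = {"double": lambda xs: [2 * x for x in xs]}` and a `read_input` that yields `[1, 2]`, the handler returns `[2, 4]` for a task naming `"double"`.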


Using techniques disclosed herein, a scalable and high-performance data analytics platform can be provided over a high-performance parallel database system built upon a scalable distributed file system. The advantages of parallel databases and distributed file systems are combined to overcome the challenges of big data analytics. Finally, in various embodiments, users are able to use familiar SQL queries to run analytic tasks, and the underlying parallel database engines translate these SQL queries into a set of execution plans, optimized according to data locality and load balance.


Although the foregoing embodiments have been described in some detail for purposes of clarity of understanding, the invention is not limited to the details provided. There are many alternative ways of implementing the invention. The disclosed embodiments are illustrative and not restrictive.

Claims
  • 1. A method, comprising: receiving, by a master node, a query from a client node, the query being input according to one or more sets of Structured Query Language (SQL) statements; creating, by the master node, a plan to process the query; selecting, with respect to the plan, one or more selected processing segments from among a plurality of distributed processing segments and metadata associated with the plan to be used in connection with performing the plan, the one or more selected processing segments being selected to process a corresponding portion of the plan; obtaining, by the master node, the metadata associated with the plan, the metadata being obtained from a central metadata store; sending, by the master node, to each of the one or more selected processing segments, the corresponding portion of the plan to be processed by that corresponding processing segment and metadata associated with the plan, the metadata associated with the plan being sent in connection with the corresponding portion of the plan to be processed, wherein the metadata is used in connection with locating or accessing a subset of data on which the corresponding selected processing segment is to perform an indicated processing; receiving, from at least one of the one or more selected processing segments, a result of the portion of the plan processed by the corresponding processing segment; and generating, a master response to the query based at least in part on the result of the portion of the plan received from the at least one of the one or more selected processing segments, wherein: the creating the plan and selecting the one or more selected processing segments collectively comprise translating the query from the one or more SQL statements to the plan according to an optimization based on (i) data locality of data associated with the plan, and (ii) load balances among the plurality of distributed processing segments; the metadata or corresponding portion of the plan sent to each of the plurality of distributed processing segments for which a portion of the plan is assigned includes an indication of one or more data analytics function to be used to process the portion of the plan; and the one or more selected processing segments correspondingly invoke the one or more data analytical functions respectively embedded in the one or more selected processing segments.
  • 2. The method of claim 1, wherein the metadata identifies a location data corresponding to one or more portions of the plan and at least a part of one or more data analytic processing to be performed in connection with processing the corresponding portion of the plan.
  • 3. The method of claim 1, wherein a request to process the query comprises one or more SQL statements.
  • 4. The method of claim 1, wherein a request to process the query comprises one or more SQL statements to compute one or more of the following: Logistic Regression, Multinomial Logistic Regression, K-means clustering, Association Rules based market basket analysis, and Latent Dirichlet based topic modeling.
  • 5. The method of claim 1, wherein a request to process the query is received at the master node, wherein the master node corresponds to a master node of a large scale distributed system.
  • 6. The method of claim 1, wherein the creating of the plan to process the query includes creating a query plan, slicing the query plan into a plurality of slices, and identifying for each slice a group of processing segments to perform tasks comprising that slice of the query plan.
  • 7. The method of claim 1, wherein each of the selected one or more processing segments is configured to use the metadata to access the data to be processed by that segment.
  • 8. The method of claim 1, wherein a request to process the query is received at the master node, wherein the master node corresponds to a master node of a large scale distributed system, and the large scale distributed system comprises a distributed data storage layer comprising data stored in an instance of a Hadoop Distributed File System (HDFS) and the metadata indicates a location within the HDFS of data to be processed by the corresponding processing segment of the one or more selected processing segments.
  • 9. The method of claim 1, further comprising: embedding in each of the plurality of distributed processing segments a library or other shared object comprising one or more data analytical functions, wherein the library or other shared object is included in the one or more selected processing segments as deployed.
  • 10. The method of claim 9, wherein the library or other shared object embodies the one or more data analytical functions in a form of one or more of: compiled C++ code, compiled Java, compiled Fortran, or other compiled code.
  • 11. The method of claim 1, wherein the plurality of distributed processing segments comprise a subset of parallel processing segments comprising a massively parallel processing (MPP) database system.
  • 12. The method of claim 1, wherein the one or more data analytics function includes a User-Defined function, a step function, or a final function of a User-Defined Aggregator.
  • 13. The method of claim 1, wherein the metadata sent to each of the one or more selected processing segments is sent in conjunction with the corresponding portion of the plan to be performed by that selected processing segment.
  • 14. The method of claim 13, wherein the metadata sent to each of the one or more selected processing segments is sent as part of the corresponding portion of the plan to be performed by that selected processing segment.
  • 15. The method of claim 1, wherein at least a portion of the metadata sent to each of the one or more selected processing segments is obtained from a central metadata store.
  • 16. The method of claim 1, wherein the one or more selected processing segments are selected based at least in part on a data distribution and available processing resources, and a number of the one or more selected processing segments is dynamic in relation to received queries.
  • 17. A system, comprising: a communication interface; and one or more processors coupled to the communication interface and configured to: receive a query from a client node, the query being input according to one or more sets of Structured Query Language (SQL) statements; create a plan to process the query; select, with respect to the plan, one or more of selected processing segments from among a plurality of distributed processing segments and metadata associated with the plan to be used in connection with performing the plan, the one or more selected processing segments being selected to process a corresponding portion of the plan; obtain the metadata associated with the plan, the metadata being obtained from a central metadata store; send, to each of the one or more selected processing segments, the corresponding portion of the plan to be processed by that corresponding processing segment and metadata associated with the plan, the metadata associated with the plan being sent in connection with the corresponding portion of the plan to be processed, wherein the metadata is used in connection with locating or accessing a subset of data on which the corresponding selected processing segment is to perform an indicated processing; receive, from at least one of the one or more selected processing segments, a result of the corresponding portion of the plan processed by the corresponding processing segment; and generate, a master response to the query based at least in part on the result of the corresponding portion of the plan received from the at least one of the one or more selected processing segments, wherein: to create the plan and to select the one or more selected processing segments collectively comprise translating the query from the one or more SQL statements to the plan according to an optimization based on (i) data locality of data associated with the plan, and (ii) load balances among the plurality of distributed processing segments; the metadata or corresponding portion of the plan sent to each of the plurality of distributed processing segments for which a portion of the plan is assigned includes an indication of one or more data analytics function to be used to process the portion of the plan; and the one or more selected processing segments correspondingly invoke the one or more data analytical functions respectively embedded in the one or more selected processing segments.
  • 18. A computer program product embodied in a tangible, non-transitory computer readable storage medium, comprising computer instructions for: receiving, by a master node, a query from a client node, the query being input according to one or more sets of Structured Query Language (SQL) statements; creating a plan to processing the query; selecting, with respect to the plan, one or more selected processing segments from among a plurality of distributed processing segments and metadata associated with the plan to be used in connection with performing the plan, the one or more selected processing segments being selected to process a corresponding portion of the plan; obtaining the metadata associated with the plan, the metadata being obtained from a central metadata store; sending, to each of the one or more selected processing segments, the corresponding portion of the plan to be processed by that corresponding processing segment and metadata, the metadata associated with the plan being sent in connection with the corresponding portion of the plan to be processed, wherein the metadata is used in connection with locating or accessing a subset of data on which the corresponding selected processing segment is to perform an indicated processing; receiving, from at least one of the one or more selected processing segments, a result of processing the corresponding portion of the plan; and generating, a master response to the query based at least in part on the result of the corresponding portion of the plan received from the at least one of the one or more selected processing segments, wherein: the creating the plan and selecting the one or more selected processing segments collectively comprise translating the query from the one or more SQL statements to the plan according to an optimization based on (i) data locality of data associated with the plan, and (ii) load balances among the plurality of distributed processing segments; the metadata or corresponding portion of the plan sent to each of the plurality of distributed processing segments for which a portion of the plan is assigned includes an indication of one or more data analytics function to be used to process the portion of the plan; and the one or more selected processing segments correspondingly invoke the one or more data analytical functions respectively embedded in the one or more selected processing segments.
CROSS REFERENCE TO OTHER APPLICATIONS

This application is a continuation of co-pending U.S. patent application Ser. No. 15/389,321, entitled DATA ANALYTICS PLATFORM OVER PARALLEL DATABASES AND DISTRIBUTED FILE SYSTEMS filed Dec. 22, 2016 which is incorporated herein by reference for all purposes, which is a continuation of U.S. patent application Ser. No. 13/840,912, entitled DATA ANALYTICS PLATFORM OVER PARALLEL DATABASES AND DISTRIBUTED FILE SYSTEMS filed Mar. 15, 2013, now U.S. Pat. No. 9,563,648, which is incorporated herein by reference for all purposes, which claims priority to U.S. Provisional Application No. 61/769,043, entitled INTEGRATION OF MASSIVELY PARALLEL PROCESSING WITH A DATA INTENSIVE SOFTWARE FRAMEWORK filed Feb. 25, 2013 which is incorporated herein by reference for all purposes.

US Referenced Citations (91)
Number Name Date Kind
5933422 Kusano Aug 1999 A
6957222 Ramesh Oct 2005 B1
7599969 Mignet Oct 2009 B2
7653665 Stefani Jan 2010 B1
7702676 Brown Apr 2010 B2
7743051 Kashyap Jun 2010 B1
7885953 Chen Feb 2011 B2
7906259 Hayashi Mar 2011 B2
7908242 Achanta Mar 2011 B1
7921130 Hinshaw Apr 2011 B2
7984043 Waas Jul 2011 B1
8171018 Zane May 2012 B2
8266122 Newcombe Sep 2012 B1
8359305 Burke Jan 2013 B1
8572051 Chen Oct 2013 B1
8645356 Bossman Feb 2014 B2
8713038 Cohen Apr 2014 B2
8805870 Chen Aug 2014 B2
8868546 Beerbower Oct 2014 B2
8935232 Abadi Jan 2015 B2
8990335 Fauser Mar 2015 B2
9110706 Yu Aug 2015 B2
9235396 Ke Jan 2016 B2
9626411 Chang Apr 2017 B1
9639575 Leida May 2017 B2
20030037048 Kabra Feb 2003 A1
20030212668 Hinshaw et al. Nov 2003 A1
20040030739 Yousefi'zadeh Feb 2004 A1
20040073549 Turkel Apr 2004 A1
20040095526 Yamabuchi May 2004 A1
20040186842 Wesemann Sep 2004 A1
20050289098 Barsness Dec 2005 A1
20060224563 Hanson Oct 2006 A1
20070050328 Li Mar 2007 A1
20080059489 Han Mar 2008 A1
20080082644 Isard Apr 2008 A1
20080086442 Dasdan Apr 2008 A1
20080120314 Yang May 2008 A1
20080195577 Fan Aug 2008 A1
20080222090 Sasaki Sep 2008 A1
20080244585 Candea Oct 2008 A1
20090043745 Barsness Feb 2009 A1
20090182792 Bomma Jul 2009 A1
20090216709 Cheng Aug 2009 A1
20090234850 Kocsis Sep 2009 A1
20090254916 Bose Oct 2009 A1
20090271385 Krishnamoorthy Oct 2009 A1
20090292668 Xu Nov 2009 A1
20100088298 Xu Apr 2010 A1
20100114970 Marin May 2010 A1
20100198806 Graefe Aug 2010 A1
20100198807 Kuno Aug 2010 A1
20100198808 Graefe Aug 2010 A1
20100198809 Graefe Aug 2010 A1
20100223305 Park Sep 2010 A1
20100241827 Yu Sep 2010 A1
20100241828 Yu Sep 2010 A1
20100257198 Cohen Oct 2010 A1
20100332458 Xu Dec 2010 A1
20110047172 Chen Feb 2011 A1
20110131198 Johnson Jun 2011 A1
20110138123 Gurajada Jun 2011 A1
20110228668 Pillai Sep 2011 A1
20110231389 Surna Sep 2011 A1
20110246511 Smith Oct 2011 A1
20110302164 Krishnamurthy Dec 2011 A1
20120036146 Annapragada Feb 2012 A1
20120078973 Gerdes Mar 2012 A1
20120191699 George Jul 2012 A1
20120259894 Varley Oct 2012 A1
20130030692 Hagan Jan 2013 A1
20130031139 Chen Jan 2013 A1
20130054630 Briggs Feb 2013 A1
20130117237 Thomsen May 2013 A1
20130138612 Iyer May 2013 A1
20130166523 Pathak Jun 2013 A1
20130173716 Rogers Jul 2013 A1
20130346988 Bruno Dec 2013 A1
20140019683 Ishikawa Jan 2014 A1
20140067792 Erdogan Mar 2014 A1
20140095526 Harada Apr 2014 A1
20140108459 Gaza Apr 2014 A1
20140108861 Abadi Apr 2014 A1
20140122542 Barnes May 2014 A1
20140136590 Marty May 2014 A1
20140149355 Gupta May 2014 A1
20140149357 Gupta May 2014 A1
20140188841 Sun Jul 2014 A1
20140188884 Morris Jul 2014 A1
20140195558 Murthy Jul 2014 A1
20140201565 Candea Jul 2014 A1
Foreign Referenced Citations (3)
Number Date Country
102033889 Apr 2011 CN
WO2012050582 Apr 2012 WO
WO-2012124178 Sep 2012 WO
Non-Patent Literature Citations (1)
Entry
Brad Hedlund, “Understanding Hadoop Clusters and the Network”, Bradhedlund.com, 2011, pp. 1-22. Available at http://bradhedlund.com/2011/09/10/understanding-hadoop-clusters-and-the-network/.
Related Publications (1)
Number Date Country
20180373755 A1 Dec 2018 US
Provisional Applications (1)
Number Date Country
61769043 Feb 2013 US
Continuations (2)
Number Date Country
Parent 15389321 Dec 2016 US
Child 15821361 US
Parent 13840912 Mar 2013 US
Child 15389321 US