Sorting a large dataset is a problem commonly found in many applications. The total time required to sort a large dataset can be split into two parts: first, the input/output (I/O) delay in reading all the unsorted data from stable storage (e.g., disk) and writing the sorted data back; and second, the CPU time required to compare the data elements and place them in sorted order.
The I/O portion of the sorting process is typically much slower than the computation, particularly if the amount of computation done per unit of data is small. The time to sort data therefore tends to be dominated by the time it takes to read or write the data from or to the network or the storage medium (e.g., disk). This has changed in some recent storage systems, where I/O is dramatically faster than in previous systems, often by an order of magnitude. When sorting is implemented on such systems, the time required for computation becomes a larger share of the total, and optimizing this portion of the sorting process becomes correspondingly more important.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
One embodiment is directed to a system that splits unsorted input data into smaller subsets as it arrives, and sorts each input subset while the subsequent input subset is being read (or received, in the case of a network file system). The system according to one embodiment performs a merge sort on the sorted subsets once the output stage begins, and performs a merge to produce an output subset while the previous output subset is being written (or transmitted, in the case of a network file system).
One embodiment is directed to a method of sorting a dataset, which includes incrementally receiving data from the dataset, and incrementally storing the received data as individual input data subsets as the data is received, thereby sequentially generating a plurality of filled data subsets of unsorted data. The method includes individually sorting each filled data subset of unsorted data concurrently with receiving data for a next one of the individual input data subsets, thereby sequentially generating a plurality of sorted input data subsets, and performing a merge sort on the plurality of sorted input data subsets, thereby incrementally generating a sorted version of the dataset.
The accompanying drawings are included to provide a further understanding of embodiments and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments and together with the description serve to explain principles of embodiments. Other embodiments and many of the intended advantages of embodiments will be readily appreciated, as they become better understood by reference to the following detailed description. The elements of the drawings are not necessarily to scale relative to each other. Like reference numerals designate corresponding similar parts.
In the following Detailed Description, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration specific embodiments in which the invention may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present invention. The following detailed description, therefore, is not to be taken in a limiting sense, and the scope of the present invention is defined by the appended claims.
It is to be understood that features of the various exemplary embodiments described herein may be combined with each other, unless specifically noted otherwise.
In a naïve implementation, a program might split the sorting process into three stages: (1) read unsorted data; (2) sort; (3) write sorted data. One embodiment of the system disclosed herein overlaps almost 100% of the compute time (step 2) with the time for reading (step 1) and the time for writing (step 3), reducing the time attributable solely to the second step to almost zero. Thus, the system hides the majority of the compute time for sorting by overlapping it with the time for I/O.
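For illustration only, a minimal sketch of the naïve three-stage approach (in Python, with hypothetical read_records and write_records helpers standing in for the actual I/O) might look like the following; note that the sort in stage 2 runs while the I/O devices sit idle:

    # Hypothetical naive three-stage sort: the I/O and the compute never overlap.
    def naive_sort(read_records, write_records):
        data = list(read_records())   # stage 1: read all unsorted data
        data.sort()                   # stage 2: sort; I/O is idle during this stage
        write_records(data)           # stage 3: write the sorted data back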
One embodiment is directed to a system that splits unsorted input data into smaller subsets as it arrives, and sorts each input subset while the subsequent input subset is being read (or received, in the case of a network file system). The system according to one embodiment performs a merge sort on the sorted subsets once the output stage begins, and performs a merge to produce an output subset while the previous output subset is being written (or transmitted, in the case of a network file system).
One potential method for sorting is to use an incremental sorting mechanism such as a heap sort. Each time a datum arrives, it can be added to the heap. In this way, in theory at least, all data can be incrementally organized as it arrives, and as soon as the last piece of data arrives the heap is ready to emit the data in sorted order. However, it has been found that, in practice, this method is slow, because it does not exploit the locality of reference required for good performance in the CPU's memory cache. Thus, one embodiment incrementally sorts the data using a quick sort, which is more cache-friendly.
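As a rough illustration of the trade-off described above, the sketch below contrasts per-datum heap insertion with sorting a filled subset as one batch. Python's built-in sort is used here only as a stand-in for the quick sort mentioned above, and the function names are hypothetical:

    import heapq

    # Incremental approach: push each datum into a heap as it arrives.
    # Each push touches scattered heap positions, which is cache-unfriendly.
    def heap_incremental(stream):
        heap = []
        for datum in stream:
            heapq.heappush(heap, datum)
        return [heapq.heappop(heap) for _ in range(len(heap))]

    # Batched approach: collect a bounded subset, then sort it in one pass
    # over a contiguous buffer, which exploits locality of reference.
    def subset_sort(subset):
        subset.sort()  # stand-in for the quick sort described above
        return subset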
Computing device 10 may also have additional features/functionality. For example, computing device 10 may also include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape. Such additional storage is illustrated in
The various elements of computing device 10 are communicatively coupled together via one or more communication links 15. Computing device 10 also includes one or more communication connections 24 that allow computing device 10 to communicate with other computers/applications 26. Computing device 10 may also include input device(s) 22, such as keyboard, pointing device (e.g., mouse), pen, voice input device, touch input device, etc. Computing device 10 may also include output device(s) 20, such as a display, speakers, printer, etc.
Sorting device 208 incrementally reads or receives unsorted data from data portions 206 stored on the computing devices 204. As unsorted data is being received, it is separated into independent input data subsets 210(1)-210(X) (collectively referred to as input data subsets 210) by sorting device 208, where X is an integer greater than one. As unsorted data arrives at sorting device 208, it is added to a current input data subset 210, and once the current input data subset 210 fills, it is closed, and future unsorted data that arrives goes into the next input data subset 210. Each input data subset 210 according to one embodiment has a finite capacity (e.g., 1/100th or 1/1000th of the total size of the dataset 202 to be sorted). As each subset 210 is filled, it is sorted by sorting device 208 (referred to as a “subset-sort”), thereby generating respective sorted input data subsets 211(1)-211(X) (collectively referred to as sorted input data subsets 211). In one embodiment, all of the subset-sorts, except for the last subset-sort, are overlapped with the read of the data for the subsequent subset 210. Thus, the subset-sort for each current subset is performed while the subsequent subset is being filled. In one embodiment, each of the subset-sorts is performed using a quick-sort algorithm.
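A minimal sketch of this input phase, assuming a hypothetical read_chunk(capacity) callable that returns the next batch of unsorted records (and an empty list at end of input), might overlap each subset-sort with the read of the following subset using a background worker:

    from concurrent.futures import ThreadPoolExecutor

    def input_phase(read_chunk, subset_capacity):
        """Fill input subsets and sort each one while the next is being read."""
        sorted_subsets = []
        pending = None  # subset-sort currently running in the background
        with ThreadPoolExecutor(max_workers=1) as pool:
            while True:
                subset = read_chunk(subset_capacity)   # read/receive the next subset
                if pending is not None:
                    sorted_subsets.append(pending.result())  # previous subset-sort done
                if not subset:
                    break
                pending = pool.submit(sorted, subset)  # sort while the next read proceeds
        return sorted_subsets

This only sketches the control flow; how much of each subset-sort genuinely runs in parallel with the next read depends on the runtime and on whether the sort and the I/O can proceed on separate cores or processes.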
After the last subset 210(X) is closed, its data is subset-sorted, and then a merge-sort is performed on all of the sorted input data subsets 211 to produce a sorted dataset 212 in total sorted order. The time for performing this last subset-sort is not overlapped with I/O in one embodiment, but the amount of data in the last subset 210(X) is only a small fraction of the entire dataset 202, so the subset-sort can be performed relatively quickly. The merge-sort incrementally generates (completely) sorted data from the (partially) sorted input data subsets 211. The merge-sort according to one embodiment involves repeatedly picking the smallest data element from the entire set of sorted input data subsets 211. In one embodiment, the sorted dataset 212 is divided into a plurality of sorted output data subsets 214(1)-214(Y), where Y is an integer greater than one. In one embodiment, the total number, X, of input data subsets 210 equals the total number, Y, of sorted output data subsets 214, and the input data subsets 210 have the same size (e.g., same number of data elements) as the sorted output data subsets 214. In other embodiments, the number and size of the input data subsets 210 may vary from that of the sorted output data subsets 214. In one embodiment, sorting device 208 adjusts the size of the input data subsets 210 and/or the sorted output data subsets 214 based on the size of the dataset 202 (e.g., making these subsets 1/100th or 1/1000th of the total size of the dataset 202), so that these subsets will be larger (i.e., contain a greater number of data elements) for a larger dataset 202, and smaller (i.e., contain fewer data elements) for a smaller dataset 202.
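For illustration, the merge-sort over the sorted input data subsets 211 can be sketched as a k-way merge that repeatedly takes the smallest remaining element across all subsets and groups the results into output subsets. Python's heapq.merge is used here as a convenient k-way merge, and the output capacity parameter is an assumption for this sketch:

    import heapq
    from itertools import islice

    def merge_phase(sorted_subsets, output_capacity):
        """Yield sorted output subsets by k-way merging the sorted input subsets."""
        merged = heapq.merge(*sorted_subsets)  # lazily emits the smallest element each step
        while True:
            output_subset = list(islice(merged, output_capacity))
            if not output_subset:
                break
            yield output_subset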
In one embodiment, the input data subsets 210 have a uniform size, and in another embodiment have a non-uniform size. In one embodiment, the sorted output data subsets 214 have a uniform size, and in another embodiment have a non-uniform size. In one embodiment, sorting device 208 is configured to dynamically size the input data subsets 210 and the sorted output data subsets 214 during the sorting process.
After the first sorted output data subset 214(1) has been generated (e.g., after the first 1/100th or 1/1000th of the data in the sorted input data subsets 211 has been merge-sorted), the output or writing phase begins. In one embodiment, each subsequent portion of the merge-sort is done in the background while the results of the previous merge-sort are being output (e.g., written to disk or output to a network). Thus, sorted output data subset 214(1) is output from sorting device 208 while sorted output data subset 214(2) is being generated by sorting device 208, and sorted output data subset 214(2) is output from sorting device 208 while the next sorted output data subset 214 is being generated by sorting device 208, and this process continues until the last sorted output data subset 214(Y) is output by sorting device 208. In one embodiment, the sorted data that is being generated for each current output data subset 214 is stored in a memory cache as it is generated, and is output from the memory cache while the next output data subset 214 is being generated.
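A sketch of this output phase, assuming a hypothetical write_chunk callable for writing (or transmitting) one output subset, writes each completed subset in the background while the next subset is being merged:

    from concurrent.futures import ThreadPoolExecutor

    def output_phase(output_subsets, write_chunk):
        """Write each output subset while the next one is being generated."""
        pending_write = None
        with ThreadPoolExecutor(max_workers=1) as pool:
            for subset in output_subsets:       # pulling the next subset runs the merge
                if pending_write is not None:
                    pending_write.result()      # wait for the previous write to finish
                pending_write = pool.submit(write_chunk, subset)
            if pending_write is not None:
                pending_write.result()          # flush the final output subset

In this arrangement the merge that produces the next output data subset proceeds while the previous output data subset is still being written, mirroring the overlap described above.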
In this way, by splitting the data into X shards or subsets 210, the only CPU time that is not overlapped with I/O is the time involved in subset-sorting 1/Xth of the data, followed by the time to merge-sort 1/Xth of the data. This makes virtually all of the CPU time for sorting disappear into the I/O time, even in systems where the I/O time is not much more than the compute time. For example, for subsets 210 that are each 1/100th of the total size of the input dataset 202, the only CPU time that is not overlapped with an I/O operation is the time for subset-sorting 1/100th of the total data plus the time to merge-sort 1/100th of the data.
In one embodiment, the sorted output data subsets in method 300 each have a same size as the individual input data subsets. The outputting each of the sorted output data subsets in method 300 according to one embodiment comprises outputting each of the sorted output data subsets to a storage medium. In another embodiment, the outputting each of the sorted output data subsets comprises outputting each of the sorted output data subsets to a network file system. In one embodiment, a size of the individual input data subsets in method 300 is varied based on a size of the dataset. The individual input data subsets according to one embodiment each have a size that is a predetermined fraction of a size of the dataset. In one embodiment of method 300, the dataset is stored as a plurality of portions on a plurality of computing devices, and the data from the dataset is incrementally received from the plurality of computing devices. The individually sorting each filled data subset of unsorted data in method 300 according to one embodiment is performed using a quick-sort algorithm. In one embodiment, the data incrementally received from the dataset is received from a storage medium, and in another embodiment the data is received from a network file system.
Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that a variety of alternate and/or equivalent implementations may be substituted for the specific embodiments shown and described without departing from the scope of the present invention. This application is intended to cover any adaptations or variations of the specific embodiments discussed herein. Therefore, it is intended that this invention be limited only by the claims and the equivalents thereof.