This invention relates generally to databases, and more particularly to systems and methods for managing datasets in databases.
With the large amounts of data generated in recent years, data mining and machine learning are playing an increasingly important role in today's computing environment. For example, businesses may use data mining or machine learning to predict the behavior of users. This predicted behavior may then be used to determine which plan to proceed with, or how to grow the business.
The data used in data mining and analytics is typically not stored in a uniform data storage system. Many data storage systems utilize different file systems, and those different file systems are typically not compatible with each other. Further, the data may reside in geographically diverse locations.
One conventional method of performing data analytics across different databases is to copy data from one database to a central database and perform the analytics on the central database. However, this results in an inefficient use of storage space and creates data consistency issues between the two databases.
There is a need, therefore, for an improved method, article of manufacture, and apparatus for managing data.
The present invention will be readily understood by the following detailed description in conjunction with the accompanying drawings, wherein like reference numerals designate like structural elements.
A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. While the invention is described in conjunction with such embodiment(s), it should be understood that the invention is not limited to any one embodiment. On the contrary, the scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications, and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the present invention. These details are provided for the purpose of example, and the present invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the present invention is not unnecessarily obscured.
It should be appreciated that the present invention can be implemented in numerous ways, including as a process, an apparatus, a system, a device, a method, or a computer readable medium such as a computer readable storage medium or a computer network wherein computer program instructions are sent over optical or electronic communication links. Applications may take the form of software executing on a general purpose computer or be hardwired or hard coded in hardware. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention.
An embodiment of the invention will be described with reference to a data storage system in the form of a storage system configured to store files, but it should be understood that the principles of the invention are not limited to this configuration. Rather, they are applicable to any system capable of storing and handling various types of objects, in analog, digital, or other form. Although terms such as document, file, object, etc. may be used by way of example, the principles of the invention are not limited to any particular form of representing and storing data or other information; rather, they are equally applicable to any object capable of representing information.
The meta store includes information about the different file systems in Storage Nodes 112, such as API information for each file system's interface, and different attributes and metadata of the file system. The meta store also includes information on the binary location of the Storage Abstraction Layer 110. As new file systems are added to the database system, they are registered (e.g. provided API information, other attributes, etc.) with the meta store. Once a new file system is added, instances of that file system may be created to store data objects, such as databases and tables.
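For illustration, the following is a minimal sketch of how such a meta store might record registration information and create instances. The class, method, and field names (api_endpoint, attributes, sal_binary_path, and so on) are hypothetical and not taken from the description above; a simple in-memory dictionary stands in for the actual meta store.

```python
# Hypothetical sketch of a meta store for pluggable file systems.
# Names and fields are illustrative only.

class MetaStore:
    def __init__(self):
        self._file_systems = {}   # registered file system types
        self._instances = {}      # concrete instances of those types

    def register_file_system(self, name, api_endpoint, attributes):
        """Record API information and attributes for a new file system type."""
        self._file_systems[name] = {
            "api_endpoint": api_endpoint,
            "attributes": attributes,   # e.g. {"supports_update": False}
        }

    def create_instance(self, fs_name, instance_id, location):
        """Create an instance of a registered file system to hold data objects."""
        if fs_name not in self._file_systems:
            raise ValueError(f"{fs_name} is not registered")
        self._instances[instance_id] = {"type": fs_name, "location": location}
        return self._instances[instance_id]


# Usage: register HDFS, then create an instance that can store tables.
meta = MetaStore()
meta.register_file_system("HDFS", "hdfs://namenode:8020",
                          {"supports_update": False, "supports_append": True})
meta.create_instance("HDFS", "hdfs-instance-1", "rack-1")
```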
The storage nodes may be different file systems. For example, one storage node may be the Hadoop Distributed File System (HDFS), while another storage node may be NFS. Having multiple file systems presents some challenges. One challenge is that different file systems do not all support the same commands. The Storage Abstraction Layer helps address this challenge.
In some embodiments, the Storage Abstraction Layer selects a file system instance. A file system instance means a physical storage system for a specific file system. As discussed above, there may be several different file systems, and several different instances. The instances may be of the same file system, or they may be of different file systems. Different file systems may have different semantics or different performance characteristics. For example, some file systems allow data to be updated, while others only allow data to be appended. The Storage Abstraction Layer chooses a file system based on the file system's attributes.
For example, in some embodiments, if a user wanted to modify or update a file that is stored on an underlying storage system which does not support file modification, the Storage Abstraction Layer may recognize the update command and move the file from the underlying storage system to another which does support file modification. The move may be temporary (e.g. move the file back after the user is finished with the file), or the move may be permanent.
In some embodiments, the Storage Abstraction Layer may choose to store a data object in a file system that does not allow updating. This may be preferable in cases where the data object is only read and never modified, and the file system is efficient for retrieving data. Thus, the Storage Abstraction Layer may take into account the usage statistics of the data object to determine what file system to use to store the data.
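The following is a minimal sketch of the selection idea described in the preceding paragraphs: pick an instance whose attributes satisfy the data object's needs, taking usage statistics into account. The attribute names (supports_update, read_latency) and the scoring are hypothetical; the actual selection logic is not specified above.

```python
# Hypothetical selection of a file system instance based on attributes
# and usage statistics of the data object.

def choose_instance(instances, needs_update, usage_stats):
    candidates = []
    for inst in instances:
        attrs = inst["attributes"]
        # Data that is updated in place needs a file system that supports
        # updates; read-only data may be placed on append-only systems too.
        if needs_update and not attrs.get("supports_update", False):
            continue
        candidates.append(inst)
    if not candidates:
        raise RuntimeError("no suitable file system instance")
    # For data that is mostly read, prefer the instance with the lowest
    # advertised read latency (an illustrative attribute).
    if usage_stats.get("reads", 0) >= usage_stats.get("writes", 0):
        return min(candidates,
                   key=lambda i: i["attributes"].get("read_latency", 1.0))
    return candidates[0]
```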
In some embodiments, the Storage Abstraction Layer may perform semantic adaptation. This may be preferable when the underlying file system is not able to communicate directly with segments. This may occur when the interface the Storage Abstraction Layer exposes to the segment execution engine does not match the semantics of the underlying file system. Other examples include instances where the functionality required by the segments is not supported by the underlying file system.
For example, a user may wish to truncate a file. However, the file may be stored on a segment where the underlying storage does not allow truncating files. The user is unaware of this because the user does not know where the files are physically stored. Typically, without a Storage Abstraction Layer, the underlying file system would not be able to understand the truncate command.
One example of semantic adaptation includes adapting the truncate command. Suppose that a segment requires a piece of data to be truncated, but the underlying file system does not support truncate functionality. The Storage Abstraction Layer may be able to put several commands together to mimic a truncate command. Since the Storage Abstraction Layer has access to the metadata of the file system stored in the meta store, it knows which commands are allowed in the file system, as well as how to access the file system via its APIs. Suppose that the file to be truncated is File A, and File A consists of 20 bytes. The segment wants the last 10 bytes to be deleted. With this requirement, the Storage Abstraction Layer may employ semantic adaptation to complete the truncation even when the underlying file system does not support a truncate command. In some embodiments, the Storage Abstraction Layer may first copy the first 10 bytes of File A to a temporary file, called File B. Then, the original File A is deleted, leaving only the temporary File B. After the original File A is deleted, the temporary File B is renamed to File A. File A now contains only the first 10 bytes of the original File A. In other words, File A has been truncated, even though the underlying file system did not support truncation. The Storage Abstraction Layer, by understanding how to access the underlying file system via the meta store, sent a series of commands to mimic a truncate. This series of commands may be stored in the meta store so that future truncate requests may make use of it.
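As a concrete sketch of the copy/delete/rename sequence just described, the snippet below assumes a generic file system client exposing only read, write, delete, and rename operations; the method names are illustrative, not an actual API.

```python
# Emulate truncate(path, new_size) on a file system with no truncate call,
# using only read, write, delete, and rename (the File A / File B example).

def emulated_truncate(fs, path, new_size):
    tmp_path = path + ".tmp"               # "File B" in the example above
    data = fs.read(path)                   # read the original File A
    fs.write(tmp_path, data[:new_size])    # copy the first new_size bytes
    fs.delete(path)                        # delete the original File A
    fs.rename(tmp_path, path)              # rename File B back to File A
```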
Another example of semantic adaptation includes a file update command. As mentioned above, some file systems do not allow a file to be updated. Suppose a segment requires that a file be updated, but the file is stored in a file system that does not allow files to be updated. In some embodiments, the Storage Abstraction Layer may record the modifications in a separate file as a new version. For example, if File A is to be modified, the separate file may be called File A_ver2. The segment (or user) will see that changes are being made to File A, but in fact, File A remains unchanged and the changes are being stored in File A_ver2. After the segment is finished modifying or updating the file, there may be two files stored: File A and File A_ver2. When subsequent users want to access File A, the Storage Abstraction Layer may cause the two files to be merged. With File A merged with File A_ver2 and called File A, the new File A will include all the changes made by the previous user. In other words, File A has been modified, even though the underlying file system did not support updating.
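The sketch below illustrates the versioning approach with the same generic, hypothetical client as before. The _ver2 naming follows the example above; for simplicity the sketch assumes the version file holds the complete updated contents rather than a delta, so the merge step simply promotes the version file to the original name.

```python
# Hypothetical sketch: record updates to File A in a separate version file
# (File A_ver2) on a file system without in-place updates, and merge the
# two files when the data is next read.

def write_update(fs, path, new_contents):
    fs.write(path + "_ver2", new_contents)   # changes go to File A_ver2

def read_with_merge(fs, path):
    version_path = path + "_ver2"
    if fs.exists(version_path):
        merged = fs.read(version_path)   # the updated contents
        fs.delete(path)                  # drop the unchanged File A
        fs.rename(version_path, path)    # present the merge as File A
        return merged
    return fs.read(path)
```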
With the Storage Abstraction Layer, many different file systems may be supported. New and different storage systems with different file systems may be “plugged” into the database, without affecting the ability of the database to run its queries or jobs, as long as the meta store is updated with information about the new file system, such as its APIs.
For the sake of clarity, the processes and methods herein have been illustrated with a specific flow, but it should be understood that other sequences may be possible and that some may be performed in parallel, without departing from the spirit of the invention. Additionally, steps may be subdivided or combined. As disclosed herein, software written in accordance with the present invention may be stored in some form of computer-readable medium, such as memory or CD-ROM, or transmitted over a network, and executed by a processor.
All references cited herein are intended to be incorporated by reference. Although the present invention has been described above in terms of specific embodiments, it is anticipated that alterations and modifications to this invention will no doubt become apparent to those skilled in the art and may be practiced within the scope and equivalents of the appended claims. More than one computer may be used, such as by using multiple computers in a parallel or load-sharing arrangement or distributing tasks across multiple computers such that, as a whole, they perform the functions of the components identified herein; i.e. they take the place of a single computer. Various functions described above may be performed by a single process or groups of processes, on a single computer or distributed over several computers. Processes may invoke other processes to handle certain tasks. A single storage device may be used, or several may be used to take the place of a single storage device. The disclosed embodiments are illustrative and not restrictive, and the invention is not to be limited to the details given herein. There are many alternative ways of implementing the invention. It is therefore intended that the disclosure and following claims be interpreted as covering all such alterations and modifications as fall within the true spirit and scope of the invention.
This application is a continuation of co-pending U.S. patent application Ser. No. 15/150,263, entitled PLUGGABLE STORAGE SYSTEM FOR DISTRIBUTED FILE SYSTEMS, filed May 9, 2016, which is incorporated herein by reference for all purposes, and which is a continuation of U.S. patent application Ser. No. 13/843,067, entitled PLUGGABLE STORAGE SYSTEM FOR DISTRIBUTED FILE SYSTEMS, filed Mar. 15, 2013, now U.S. Pat. No. 9,454,548, which is incorporated herein by reference for all purposes, and which claims priority to U.S. Provisional Application No. 61/769,043, entitled INTEGRATION OF MASSIVELY PARALLEL PROCESSING WITH A DATA INTENSIVE SOFTWARE FRAMEWORK, filed Feb. 25, 2013, which is incorporated herein by reference for all purposes.
Number | Date | Country
---|---|---
20200012646 A1 | Jan 2020 | US

Number | Date | Country
---|---|---
61769043 | Feb 2013 | US

Relation | Number | Date | Country
---|---|---|---
Parent | 15150263 | May 2016 | US
Child | 16573925 | | US
Parent | 13843067 | Mar 2013 | US
Child | 15150263 | | US