The present disclosure relates to resource management systems and methods that manage resources related to data processing and data storage.
Many data storage and retrieval systems exist today. For example, in a shared-disk system, all data is stored on a shared storage device that is accessible from all of the processing nodes in a data cluster. In this type of system, all data changes are written to the shared storage device to ensure that all processing nodes in the data cluster access a consistent version of the data. As the number of processing nodes increases in a shared-disk system, the shared storage device (and the communication links between the processing nodes and the shared storage device) becomes a bottleneck that slows data read and data write operations, and the bottleneck is further aggravated with the addition of each new processing node. Thus, existing shared-disk systems have limited scalability due to this bottleneck problem.
Another existing data storage and retrieval system is referred to as a “shared-nothing architecture.” In this architecture, data is distributed across multiple processing nodes such that each node stores a subset of the data in the entire database. When a new processing node is added or removed, the shared-nothing architecture must rearrange data across the multiple processing nodes. This rearrangement of data can be time-consuming and disruptive to data read and write operations executed during the data rearrangement. And, the affinity of data to a particular node can create “hot spots” on the data cluster for popular data. Further, since each processing node also performs the storage function, this architecture requires at least one processing node to store data. Thus, the shared-nothing architecture fails to store data if all processing nodes are removed. Additionally, management of data in a shared-nothing architecture is complex due to the distribution of data across many different processing nodes.
The systems and methods described herein provide an improved approach to data storage and data retrieval that alleviates the above-identified limitations of existing systems.
Non-limiting and non-exhaustive embodiments of the present disclosure are described with reference to the following figures, wherein like reference numerals refer to like parts throughout the various figures unless otherwise specified.
The systems and methods described herein provide a new platform for storing and retrieving data without the problems faced by existing systems. For example, this new platform supports the addition of new nodes without the need for rearranging data files as required by the shared-nothing architecture. Additionally, nodes can be added to the platform without creating bottlenecks that are common in the shared-disk system. This new platform is always available for data read and data write operations, even when some of the nodes are offline for maintenance or have suffered a failure. The described platform separates the data storage resources from the computing resources so that data can be stored without requiring the use of dedicated computing resources. This is an improvement over the shared-nothing architecture, which fails to store data if all computing resources are removed. Therefore, the new platform continues to store data even though the computing resources are no longer available or are performing other tasks.
In the following description, reference is made to the accompanying drawings that form a part thereof, and in which is shown by way of illustration specific exemplary embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the concepts disclosed herein, and it is to be understood that modifications to the various disclosed embodiments may be made, and other embodiments may be utilized, without departing from the scope of the present disclosure. The following detailed description is, therefore, not to be taken in a limiting sense.
Reference throughout this specification to “one embodiment,” “an embodiment,” “one example” or “an example” means that a particular feature, structure or characteristic described in connection with the embodiment or example is included in at least one embodiment of the present disclosure. Thus, appearances of the phrases “in one embodiment,” “in an embodiment,” “one example” or “an example” in various places throughout this specification are not necessarily all referring to the same embodiment or example. In addition, it should be appreciated that the figures provided herewith are for explanation purposes to persons ordinarily skilled in the art and that the drawings are not necessarily drawn to scale.
Embodiments in accordance with the present disclosure may be embodied as an apparatus, method or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware-comprised embodiment, an entirely software-comprised embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, embodiments of the present disclosure may take the form of a computer program product embodied in any tangible medium of expression having computer-usable program code embodied in the medium.
Any combination of one or more computer-usable or computer-readable media may be utilized. For example, a computer-readable medium may include one or more of a portable computer diskette, a hard disk, a random access memory (RAM) device, a read-only memory (ROM) device, an erasable programmable read-only memory (EPROM or Flash memory) device, a portable compact disc read-only memory (CDROM), an optical storage device, and a magnetic storage device. Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages. Such code may be compiled from source code to computer-readable assembly language or machine code suitable for the device or computer on which the code will be executed.
Embodiments may also be implemented in cloud computing environments. In this description and the following claims, “cloud computing” may be defined as a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned via virtualization and released with minimal management effort or service provider interaction and then scaled accordingly. A cloud model can be composed of various characteristics (e.g., on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service), service models (e.g., Software as a Service (“SaaS”), Platform as a Service (“PaaS”), and Infrastructure as a Service (“IaaS”)), and deployment models (e.g., private cloud, community cloud, public cloud, and hybrid cloud).
The flow diagrams and block diagrams in the attached figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flow diagrams or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It will also be noted that each block of the block diagrams and/or flow diagrams, and combinations of blocks in the block diagrams and/or flow diagrams, may be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. These computer program instructions may also be stored in a computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flow diagram and/or block diagram block or blocks.
The systems and methods described herein provide a flexible and scalable data warehouse using a new data processing platform. In some embodiments, the described systems and methods leverage a cloud infrastructure that supports cloud-based storage resources, computing resources, and the like. Example cloud-based storage resources offer significant storage capacity available on-demand at a low cost. Further, these cloud-based storage resources may be fault-tolerant and highly scalable, which can be costly to achieve in private data storage systems. Example cloud-based computing resources are available on-demand and may be priced based on actual usage levels of the resources. Typically, the cloud infrastructure is dynamically deployed, reconfigured, and decommissioned in a rapid manner.
In the described systems and methods, a data storage system utilizes an SQL (Structured Query Language)-based relational database. However, these systems and methods are applicable to any type of database, and any type of data storage and retrieval platform, using any data storage architecture and using any language to store and retrieve data within the data storage and retrieval platform. The systems and methods described herein further provide a multi-tenant system that supports isolation of computing resources and data between different customers/clients and between different users within the same customer/client.
Resource manager 102 is also coupled to metadata 110, which is associated with the entirety of data stored throughout data processing platform 100. In some embodiments, metadata 110 includes a summary of data stored in remote data storage systems as well as data available from a local cache. Additionally, metadata 110 may include information regarding how data is organized in the remote data storage systems and the local caches. Metadata 110 allows systems and services to determine whether a piece of data needs to be accessed without loading or accessing the actual data from a storage device.
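By way of illustration only, the following Python sketch shows one way such metadata-driven pruning might operate. The per-file minimum/maximum statistics, the `FileMetadata` structure, and the `files_to_scan` helper are assumptions made for this example; the disclosure does not specify a metadata format.

```python
from dataclasses import dataclass

@dataclass
class FileMetadata:
    """Summary statistics for one remote data file (assumed representation)."""
    path: str
    min_value: int  # minimum of an indexed column within the file
    max_value: int  # maximum of an indexed column within the file

def files_to_scan(metadata: list[FileMetadata], lo: int, hi: int) -> list[str]:
    """Return only the files whose value range overlaps the query predicate.

    Files whose [min_value, max_value] range falls entirely outside the
    requested interval are pruned using metadata alone -- the file contents
    are never read from a storage device.
    """
    return [m.path for m in metadata if m.max_value >= lo and m.min_value <= hi]

# Example: a predicate WHERE col BETWEEN 50 AND 60 touches only the second file.
catalog = [
    FileMetadata("part-0001", min_value=0, max_value=49),
    FileMetadata("part-0002", min_value=50, max_value=99),
]
assert files_to_scan(catalog, 50, 60) == ["part-0002"]
```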
Resource manager 102 is further coupled to an execution platform 112, which provides multiple computing resources that execute various data storage and data retrieval tasks, as discussed in greater detail below. Execution platform 112 is coupled to multiple data storage devices 116, 118, and 120 that are part of a storage platform 114. Although three data storage devices 116, 118, and 120 are shown in FIG. 1, execution platform 112 is capable of communicating with any number of data storage devices.
In particular embodiments, the communication links between resource manager 102 and users 104-108, metadata 110, and execution platform 112 are implemented via one or more data communication networks. Similarly, the communication links between execution platform 112 and data storage devices 116-120 in storage platform 114 are implemented via one or more data communication networks. These data communication networks may utilize any communication protocol and any type of communication medium. In some embodiments, the data communication networks are a combination of two or more data communication networks (or sub-networks) coupled to one another. In alternate embodiments, these communication links are implemented using any type of communication medium and any communication protocol.
As shown in FIG. 1, data storage devices 116-120 are decoupled from the computing resources associated with execution platform 112.
Resource manager 102, metadata 110, execution platform 112, and storage platform 114 are shown in FIG. 1 as individual components. However, each of these components may be implemented as a distributed system (e.g., distributed across multiple systems or platforms at multiple geographic locations).
During typical operation, data processing platform 100 processes multiple queries (or requests) received from any of the users 104-108. These queries are managed by resource manager 102 to determine when and how to execute the queries. For example, resource manager 102 may determine what data is needed to process the query and further determine which nodes within execution platform 112 are best suited to process the query. Some nodes may have already cached the data needed to process the query and, therefore, are good candidates for processing the query. Metadata 110 assists resource manager 102 in determining which nodes in execution platform 112 already cache at least a portion of the data needed to process the query. One or more nodes in execution platform 112 process the query using data cached by the nodes and, if necessary, data retrieved from storage platform 114. It is desirable to retrieve as much data as possible from caches within execution platform 112 because the retrieval speed is typically much faster than retrieving data from storage platform 114.
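The following sketch illustrates, under assumed data structures, how a resource manager might rank execution nodes by cache affinity when routing a query. The `pick_execution_nodes` scorer and the representation of node caches as sets of file names are hypothetical.

```python
def pick_execution_nodes(needed_files: set[str],
                         node_caches: dict[str, set[str]]) -> list[str]:
    """Rank execution nodes by how much of the needed data they already cache.

    Nodes holding more of the query's files in their local cache are
    preferred, so that as little data as possible must be fetched from the
    slower storage platform.
    """
    return sorted(node_caches,
                  key=lambda node: len(node_caches[node] & needed_files),
                  reverse=True)

caches = {
    "node-a": {"part-0001", "part-0002"},
    "node-b": {"part-0003"},
}
# A query over part-0001 and part-0002 is routed to node-a first.
assert pick_execution_nodes({"part-0001", "part-0002"}, caches)[0] == "node-a"
```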
As shown in FIG. 2, resource manager 102 includes an access manager 202 and a key manager 204 coupled to a data storage device 206.
Resource manager 102 also includes an SQL compiler 212, an SQL optimizer 214, and an SQL executor 216. SQL compiler 212 parses SQL queries and generates the execution code for the queries. SQL optimizer 214 determines the best method to execute queries based on the data that needs to be processed. SQL optimizer 214 also handles various data pruning operations and other data optimization techniques to improve the speed and efficiency of executing the SQL query. SQL executor 216 executes the query code for queries received by resource manager 102.
A query scheduler and coordinator 218 sends received queries to the appropriate services or systems for compilation, optimization, and dispatch to execution platform 112. For example, queries may be prioritized and processed in that prioritized order. In some embodiments, query scheduler and coordinator 218 identifies or assigns particular nodes in execution platform 112 to process particular queries. A virtual warehouse manager 220 manages the operation of multiple virtual warehouses implemented in execution platform 112. As discussed below, each virtual warehouse includes multiple execution nodes that each include a cache and a processor.
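As one hedged illustration of such prioritized processing, the sketch below dispatches queries in priority order using a binary heap. The numeric priority levels and the FIFO tie-breaking counter are assumptions; the disclosure states only that queries may be prioritized and processed in that prioritized order.

```python
import heapq
import itertools

class QueryScheduler:
    """Minimal priority scheduler: queries are dispatched in priority order."""

    def __init__(self) -> None:
        self._heap = []
        self._seq = itertools.count()  # FIFO ordering among equal priorities

    def submit(self, query: str, priority: int) -> None:
        # Lower number = higher priority (an assumption for this sketch).
        heapq.heappush(self._heap, (priority, next(self._seq), query))

    def next_query(self) -> str:
        return heapq.heappop(self._heap)[2]

sched = QueryScheduler()
sched.submit("SELECT ... FROM big_table", priority=2)
sched.submit("SELECT 1", priority=1)
assert sched.next_query() == "SELECT 1"  # higher-priority query runs first
```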
Additionally, resource manager 102 includes a configuration and metadata manager 222, which manages the information related to the data stored in the remote data storage devices and in the local caches (i.e., the caches in execution platform 112). As discussed in greater detail below, configuration and metadata manager 222 uses the metadata to determine which data files need to be accessed to retrieve data for processing a particular query. A monitor and workload analyzer 224 oversees the processes performed by resource manager 102 and manages the distribution of tasks (e.g., workload) across the virtual warehouses and execution nodes in execution platform 112. Monitor and workload analyzer 224 also redistributes tasks, as needed, based on changing workloads throughout data processing platform 100. Configuration and metadata manager 222 and monitor and workload analyzer 224 are coupled to a data storage device 226. Data storage devices 206 and 226 in FIG. 2 represent any data storage device within data processing platform 100.
Resource manager 102 also includes a transaction management and access control module 228, which manages the various tasks and other activities associated with the processing of data storage requests and data access requests. For example, transaction management and access control module 228 provides consistent and synchronized access to data by multiple users or systems. Since multiple users/systems may access the same data simultaneously, changes to the data must be synchronized to ensure that each user/system is working with the current version of the data. Transaction management and access control module 228 provides control of various data processing activities at a single, centralized location in resource manager 102. In some embodiments, transaction management and access control module 228 interacts with SQL executor 216 to support the management of various tasks being executed by SQL executor 216.
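One conventional way to realize such synchronized access is version checking, sketched below. This is an illustrative concurrency scheme only, not the mechanism prescribed for transaction management and access control module 228.

```python
import threading

class VersionedStore:
    """A toy, centrally coordinated store illustrating synchronized access.

    Writers must present the version they read; a write based on a stale
    version is rejected, so every user works with the current version of the
    data. Illustrative only -- not the patented mechanism itself.
    """

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._version = 0
        self._value = None

    def read(self):
        with self._lock:
            return self._version, self._value

    def write(self, based_on_version: int, new_value) -> bool:
        with self._lock:
            if based_on_version != self._version:
                return False  # stale: caller must re-read and retry
            self._value = new_value
            self._version += 1
            return True

store = VersionedStore()
v, _ = store.read()
assert store.write(v, "first") is True
assert store.write(v, "conflicting") is False  # version has moved on
```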
Although each virtual warehouse 302-306 shown in FIG. 3 includes three execution nodes, a particular virtual warehouse may include any number of execution nodes.
Each virtual warehouse 302-306 is capable of accessing any of the data storage devices 116-120 shown in FIG. 1.
In the example of FIG. 3, virtual warehouse 302 includes three execution nodes 308, 310, and 312. Execution node 308 includes a cache 314 and a processor 316. Execution node 310 includes a cache 318 and a processor 320. Execution node 312 includes a cache 322 and a processor 324.
Similar to virtual warehouse 302 discussed above, virtual warehouse 304 includes three execution nodes 326, 328, and 330. Execution node 326 includes a cache 332 and a processor 334. Execution node 328 includes a cache 336 and a processor 338. Execution node 330 includes a cache 340 and a processor 342. Additionally, virtual warehouse 306 includes three execution nodes 344, 346, and 348. Execution node 344 includes a cache 350 and a processor 352. Execution node 346 includes a cache 354 and a processor 356. Execution node 348 includes a cache 358 and a processor 360.
In some embodiments, the execution nodes shown in FIG. 3 are stateless with respect to the data the execution nodes are caching.
Although the execution nodes shown in FIG. 3 each include one cache and one processor, alternate embodiments may include execution nodes containing any number of processors and any number of caches.
Further, the cache resources and computing resources may vary between different execution nodes. For example, one execution node may contain significant computing resources and minimal cache resources, making the execution node useful for tasks that require significant computing resources. Another execution node may contain significant cache resources and minimal computing resources, making this execution node useful for tasks that require caching of large amounts of data. Yet another execution node may contain cache resources providing faster input-output operations, useful for tasks that require fast scanning of large amounts of data. In some embodiments, the cache resources and computing resources associated with a particular execution node are determined when the execution node is created, based on the expected tasks to be performed by the execution node.
Additionally, the cache resources and computing resources associated with a particular execution node may change over time based on changing tasks performed by the execution node. For example, a particular execution node may be assigned more processing resources if the tasks performed by the execution node become more processor intensive. Similarly, an execution node may be assigned more cache resources if the tasks performed by the execution node require a larger cache capacity.
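The sketch below models such per-node resource profiles. The specific core counts and cache sizes are invented for illustration; only the general idea of sizing a node for its expected tasks, and re-sizing it later, comes from the description above.

```python
from dataclasses import dataclass

@dataclass
class ExecutionNodeSpec:
    """Hypothetical per-node resource profile, fixed at creation time but
    adjustable later as the node's workload changes."""
    cpu_cores: int
    cache_gb: int

def spec_for_task(expected_task: str) -> ExecutionNodeSpec:
    """Choose a resource mix from the expected workload (illustrative values)."""
    if expected_task == "compute_heavy":
        return ExecutionNodeSpec(cpu_cores=32, cache_gb=16)   # big CPU, small cache
    if expected_task == "large_scan":
        return ExecutionNodeSpec(cpu_cores=8, cache_gb=512)   # small CPU, big cache
    return ExecutionNodeSpec(cpu_cores=16, cache_gb=128)      # balanced default

node = spec_for_task("large_scan")
node.cpu_cores = 16  # later re-sized as its tasks become more processor intensive
```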
Although virtual warehouses 302-306 are associated with the same execution platform 112, the virtual warehouses may be implemented using multiple computing systems at multiple geographic locations. For example, virtual warehouse 302 can be implemented by a computing system at a first geographic location, while virtual warehouses 304 and 306 are implemented by another computing system at a second geographic location. In some embodiments, these different computing systems are cloud-based computing systems maintained by one or more different entities.
Additionally, each virtual warehouse is shown in FIG. 3 as having multiple execution nodes. The multiple execution nodes associated with each virtual warehouse may likewise be implemented using multiple computing systems at multiple geographic locations.
Execution platform 112 is also fault tolerant. For example, if one virtual warehouse fails, that virtual warehouse is quickly replaced with a different virtual warehouse at a different geographic location.
A particular execution platform 112 may include any number of virtual warehouses 302-306. Additionally, the number of virtual warehouses in a particular execution platform is dynamic, such that new virtual warehouses are created when additional processing and/or caching resources are needed. Similarly, existing virtual warehouses may be deleted when the resources associated with the virtual warehouse are no longer necessary.
In some embodiments, virtual warehouses 302, 304, and 306 may operate on the same data in storage platform 114, but each virtual warehouse has its own execution nodes with independent processing and caching resources. This configuration allows requests on different virtual warehouses to be processed independently and with no interference between the requests. This independent processing, combined with the ability to dynamically add and remove virtual warehouses, supports the addition of new processing capacity for new users without impacting the performance observed by the existing users.
Each virtual warehouse 408-412 is configured to communicate with a subset of all databases 414-424. For example, in environment 400, virtual warehouse 408 is configured to communicate with databases 414, 416, and 422. Similarly, virtual warehouse 410 is configured to communicate with databases 416, 418, 420, and 424. And, virtual warehouse 412 is configured to communicate with databases 416, 422, and 424. In alternate embodiments, one or more of virtual warehouses 408-412 communicate with all of the databases 414-424.
Although environment 400 shows virtual warehouses 408-412 configured to communicate with specific subsets of databases 414-424, that configuration is dynamic. For example, virtual warehouse 408 may be reconfigured to communicate with a different subset of databases 414-424 based on changing tasks to be performed by virtual warehouse 408. For instance, if virtual warehouse 408 receives requests to access data from database 418, virtual warehouse 408 may be reconfigured to also communicate with database 418. If, at a later time, virtual warehouse 408 no longer needs to access data from database 418, virtual warehouse 408 may be reconfigured to delete the communication with database 418.
Users 502-506 may submit data retrieval and data storage requests to virtual warehouse resource manager 508, which routes the data retrieval and data storage requests to an appropriate virtual warehouse 510-514 in virtual warehouse group 516. In some implementations, virtual warehouse resource manager 508 provides a dynamic assignment of users 502-506 to virtual warehouses 510-514. When submitting a data retrieval or data storage request, users 502-506 may specify virtual warehouse group 516 to process the request without specifying the particular virtual warehouse 510-514 that will process the request. This arrangement allows virtual warehouse resource manager 508 to distribute multiple requests across the virtual warehouses 510-514 based on efficiency, available resources, and the availability of cached data within the virtual warehouses 510-514. When determining how to route data processing requests, virtual warehouse resource manager 508 considers available resources, current resource loads, number of current users, and the like.
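A minimal sketch of such request routing follows, assuming a score that blends cached-data overlap with current load. The weighting and the per-warehouse state are hypothetical; the disclosure lists the factors considered without defining a formula.

```python
def route_request(request_files: set[str],
                  warehouses: dict[str, dict]) -> str:
    """Pick a warehouse in the group for a request the user did not pin.

    Scoring blends cached-data overlap against current load; the weights
    are illustrative assumptions.
    """
    def score(name: str) -> float:
        w = warehouses[name]
        cache_hits = len(w["cached"] & request_files)
        return 2.0 * cache_hits - w["load"]
    return max(warehouses, key=score)

group = {
    "vw510": {"cached": {"f1", "f2"}, "load": 0.9},
    "vw512": {"cached": {"f1"}, "load": 0.1},
}
# f1 is cached on both warehouses; vw512 wins because it is far less loaded.
assert route_request({"f1"}, group) == "vw512"
```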
In some embodiments, fault tolerance systems create a new virtual warehouse in response to a failure of a virtual warehouse. The new virtual warehouse may be in the same virtual warehouse group or may be created in a different virtual warehouse group at a different geographic location.
Each virtual warehouse 510-514 is configured to communicate with a subset of all databases 518-528. For example, in environment 500, virtual warehouse 510 is configured to communicate with databases 518, 520, and 526. Similarly, virtual warehouse 512 is configured to communicate with databases 520, 522, 524, and 528. And, virtual warehouse 514 is configured to communicate with databases 520, 526, and 528. In alternate embodiments, virtual warehouses 510-514 may communicate with any (or all) of the databases 518-528.
Although environment 500 shows one virtual warehouse group 516, alternate embodiments may include any number of virtual warehouse groups, each associated with any number of virtual warehouses. For example, different virtual warehouses may be created for each customer or group of users. Additionally, different virtual warehouses may be created for different entities, or any other group accessing different data sets. Multiple virtual warehouse groups may have different sizes and configurations. The number of virtual warehouse groups in a particular environment is dynamic and may change based on the changing needs of the users and other systems in the environment.
Virtual warehouse groups 604 and 606 as well as virtual warehouse 612 communicate with databases 620, 622, and 624 through a data communication network 618. In some embodiments, data communication networks 602 and 618 are the same network. Environment 600 allows resource manager 102 to coordinate user data storage and retrieval requests across the multiple virtual warehouses 608-616 to store and retrieve data in databases 620-624. Virtual warehouse groups 604 and 606 can be located in the same geographic area, or can be separated geographically. Additionally, virtual warehouse groups 604 and 606 can be implemented by the same entity or by different entities.
The systems and methods described herein allow data to be stored and accessed as a service that is separate from computing (or processing) resources. Even if no computing resources have been allocated from the execution platform, data is available to a virtual warehouse without requiring reloading of the data from a remote data source. Thus, data is available independently of the allocation of computing resources associated with the data. The described systems and methods are useful with any type of data. In particular embodiments, data is stored in a structured, optimized format. The decoupling of the data storage/access service from the computing services also simplifies the sharing of data among different users and groups. As discussed herein, each virtual warehouse can access any data to which it has access permissions, even at the same time as other virtual warehouses are accessing the same data. This architecture supports running queries without any actual data stored in the local cache. The systems and methods described herein are capable of transparent dynamic data movement, which moves data from a remote storage device to a local cache, as needed, in a manner that is transparent to the user of the system. Further, this architecture supports data sharing without prior data movement since any virtual warehouse can access any data due to the decoupling of the data storage service from the computing service.
Method 700 continues as the resource manager determines multiple tasks necessary to process the received statement at 706. The multiple tasks may include, for example, accessing data from a cache in an execution node, retrieving data from a remote storage device, updating data in a cache, storing data in a remote storage device, and the like. The resource manager also distributes the multiple tasks to execution nodes in the execution platform at 708. As discussed herein, the execution nodes in the execution platform are implemented within virtual warehouses. Each execution node performs an assigned task and returns a task result to the resource manager at 710. In some embodiments, the execution nodes return the task results to the query coordinator. The resource manager receives the multiple task results and creates a statement result at 712, and communicates the statement result to the user at 714. In some embodiments, the query coordinator is deleted after the statement result is communicated to the user.
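The fan-out/gather flow of method 700 can be sketched as follows, with a thread pool standing in for the execution platform's nodes. The task payloads and the combine step are placeholders for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def execute_statement(statement: str, tasks: list[str], nodes: int = 3) -> list:
    """Fan a statement's tasks out to execution nodes and gather the results."""

    def run_task(task: str):
        # Each execution node performs its assigned task (access a cache,
        # fetch from remote storage, etc.) and returns a task result.
        return f"result-of({task})"

    with ThreadPoolExecutor(max_workers=nodes) as pool:
        task_results = list(pool.map(run_task, tasks))
    # The resource manager combines the task results into one statement result.
    return task_results

print(execute_statement("SELECT ...", ["scan part-0001", "scan part-0002"]))
```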
Method 800 continues as the resource manager determines current and future resource needs at 806. For example, the resource manager can identify pending data processing requests as well as expected requests in the near future. The expected requests may be determined based on patterns of previously received data processing requests from particular users at particular times. Additionally, the resource manager may receive advance notice of a data processing project and can determine the resources needed to handle that project. The resource manager then determines at 808 whether one or more additional virtual warehouses are needed based on the current data processing requests, the current resource utilization, query response rates, and other performance metrics associated with the existing virtual warehouses. If an additional virtual warehouse is needed, the resource manager provisions a new virtual warehouse at 810. For example, if the resource manager is aware of an upcoming data processing project that will require more resources than are currently available, the resource manager can decide to provision one or more new virtual warehouses to handle the upcoming data processing project. The new virtual warehouse is provisioned quickly such that the new virtual warehouse is ready to handle the data processing requests immediately at the start time of the project.
In addition to adding new virtual warehouses, method 800 may determine whether to deactivate one or more virtual warehouses at 812. If any virtual warehouses are no longer necessary, the resource manager deactivates one or more virtual warehouses at 814. For example, if a particular virtual warehouse was created for a specific data processing project, that virtual warehouse may be deactivated after the specific project is completed. Additionally, if many virtual warehouses are operating with low resource utilization, some of the existing virtual warehouses can be deactivated without degrading the performance of the remaining virtual warehouses. In some embodiments, a virtual warehouse is created for a specific time period. After the time period has elapsed, the virtual warehouse may be deactivated. In other embodiments, the resource manager identifies virtual warehouses that have been idle for a particular amount of time and automatically deactivates those virtual warehouses.
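The provisioning and deactivation decisions of method 800 might be reconciled as sketched below. The thresholds (one warehouse per ten pending requests, a ten-minute idle limit) are invented for the example; the disclosure describes only the general provision-on-demand and deactivate-when-idle behavior.

```python
import time

def reconcile_warehouses(warehouses: dict[str, dict],
                         pending_requests: int,
                         idle_limit_s: float = 600.0) -> tuple[list[str], list[str]]:
    """Decide which virtual warehouses to provision and which to deactivate.

    The sizing rule (one warehouse per 10 pending requests) and the idle
    limit are assumptions made for this sketch.
    """
    now = time.time()
    to_deactivate = [name for name, w in warehouses.items()
                     if now - w["last_used"] > idle_limit_s]
    active = len(warehouses) - len(to_deactivate)
    wanted = -(-pending_requests // 10)  # ceiling division
    to_provision = [f"vw-new-{i}" for i in range(max(0, wanted - active))]
    return to_provision, to_deactivate

fleet = {"vw-a": {"last_used": time.time()},
         "vw-b": {"last_used": time.time() - 3600}}
new, retire = reconcile_warehouses(fleet, pending_requests=25)
# 25 pending requests call for 3 warehouses; vw-b has idled past the limit,
# so it is retired and 2 new warehouses are provisioned alongside vw-a.
assert retire == ["vw-b"] and len(new) == 2
```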
In some situations, particular users (or system administrators) may desire increased performance (e.g., faster query response times). In these situations, additional virtual warehouses may be added to support this increased performance. In other implementations, the resource manager predicts upcoming resource needs based on scheduled (but not yet executed) queries. If the scheduled queries would significantly degrade the system's performance, the resource manager can add more resources prior to executing those queries, thereby maintaining overall system performance. After those queries are executed, the added resources can be deactivated by the resource manager.
In some embodiments, the resource manager predicts a time required to execute a particular query (or group of queries). Based on current query processing performance (e.g., query processing delays, system utilization, etc.), the resource manager determines whether additional resources are needed for that particular query or group of queries. For example, if the current query processing delay exceeds a threshold value, the resource manager may create one or more new execution nodes to provide additional resources for processing the particular query or group of queries. After processing of the query or group of queries is complete, the resource manager may deactivate the new execution node(s) if they are no longer needed for processing other queries.
In some embodiments, a particular user may require certain performance levels when processing the user's queries. For example, the user may require a query response within a particular time period, such as 5 seconds. In these embodiments, the resource manager may allocate additional resources prior to executing the user's queries to ensure the user's performance levels are achieved.
The resource manager also determines whether some of the allocated data capacity is no longer needed at 910. If the resource manager determines that some of the allocated data capacity is no longer needed, the resource manager releases some of the data capacity at 912. For example, the released data capacity is returned to a pool of available data capacity that becomes available for use by other systems or services. If, at a later time, additional data capacity is needed, the resource manager can access data resources from the available pool.
Method 900 continues at 914 as the resource manager determines whether additional processing resources are needed based on the data processing requests and the current utilization of the multiple processors. The resource manager allocates additional processing resources to support the multiple users at 916 if it determines that additional processing resources are needed. The additional processing resources may be accessible to any number of the multiple users, as directed by the resource manager. For example, if a particular user has submitted a large number of data queries, a portion of the additional processing resources may be assigned to that user to assist in processing the data queries. In some embodiments, the addition of more processing resources is referred to as “vertical scaling.”
The resource manager also determines whether some of the currently allocated processing resources are no longer needed at 918. The resource manager releases some of the processing resources that are no longer needed at 920 if it determines that some of the currently allocated processing resources are no longer needed. In some embodiments, the released processing resources are returned to a pool of available processing resources that are available for use by other systems or services. If additional processing resources are needed at a later time, the resource manager can access processing resources from the available pool.
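A minimal sketch of this allocate/release cycle against a shared pool follows; the capacity units are abstract (storage gigabytes or processor slots) and the interface is hypothetical.

```python
class ResourcePool:
    """A shared pool from which capacity is allocated and later released."""

    def __init__(self, capacity: int) -> None:
        self.available = capacity

    def allocate(self, amount: int) -> bool:
        if amount > self.available:
            return False  # pool exhausted; caller must wait or grow the pool
        self.available -= amount
        return True

    def release(self, amount: int) -> None:
        # Released capacity returns to the pool for use by other systems.
        self.available += amount

pool = ResourcePool(capacity=100)
assert pool.allocate(60)    # resource manager takes capacity on demand
pool.release(60)            # and returns it when no longer needed
assert pool.available == 100
```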
As described herein, data processing platform 100 supports the dynamic activation and deactivation of various resources, such as data storage capacity, processing resources, cache resources, and the like. The single data processing platform 100 can be dynamically changed on-demand based on the current data storage and processing requirements of the pending and anticipated data processing requests. As the data storage and processing requirements change, data processing platform 100 automatically adjusts to maintain a substantially uniform level of data processing performance.
Additionally, the described data processing platform 100 permits changes to the data storage capacity and the processing resources independently. For example, the data storage capacity can be modified without making any changes to the existing processing resources. Similarly, the processing resources can be modified without making any changes to the existing data storage capacity.
Based on the number of current users and the activity level of the current users, the resource manager determines data storage resources and processing resources needed to support the activity level of the current users at 1008. Since the number of users is changing regularly, and the user activity levels may change frequently, the resource manager continuously determines whether the current resources adequately support the current users. If additional resources are needed at 1010, the resource manager provisions one or more new virtual warehouses at 1012 to support the current users. Similarly, if the number of users and/or the activity level decreases, the resource manager may deactivate one or more virtual warehouses if they are no longer needed to support the current users.
In some implementations, the same file is cached by multiple execution nodes at the same time. This multiple caching of files helps with load balancing (e.g., balancing data processing tasks) across multiple execution nodes. Additionally, caching a file in multiple execution nodes helps avoid potential bottlenecks when significant amounts of data are trying to pass through the same communication link. This implementation also supports the parallel processing of the same data by different execution nodes.
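One well-known placement scheme that lets the same file reside in several caches at once is rendezvous hashing, sketched below. The disclosure does not name a placement scheme, so this choice is an assumption; the point illustrated is that any holder of a cached copy can serve the file, which spreads load and enables parallel scans of the same data.

```python
import hashlib

def cache_holders(file_id: str, nodes: list[str], replicas: int = 2) -> list[str]:
    """Choose which execution nodes cache a given file (rendezvous hashing).

    The scheme and replica count are assumptions for this sketch; any of
    the returned nodes can serve the file.
    """
    def weight(node: str) -> int:
        digest = hashlib.sha256(f"{node}:{file_id}".encode()).hexdigest()
        return int(digest, 16)
    return sorted(nodes, key=weight, reverse=True)[:replicas]

nodes = ["node-a", "node-b", "node-c"]
holders = cache_holders("part-0001", nodes)
assert len(holders) == 2  # the file is cached on two nodes simultaneously
```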
The systems and methods described herein take advantage of the benefits of both shared-disk systems and the shared-nothing architecture. Once data is cached locally, the described platform for storing and retrieving data is scalable like the shared-nothing architecture. It also has all the benefits of a shared-disk architecture, in which processing nodes can be added and removed without any constraints (e.g., from 0 to N nodes) and without requiring any explicit reshuffling of data.
Computing device 1100 includes one or more processor(s) 1102, one or more memory device(s) 1104, one or more interface(s) 1106, one or more mass storage device(s) 1108, and one or more Input/Output (I/O) device(s) 1110, all of which are coupled to a bus 1112. Processor(s) 1102 include one or more processors or controllers that execute instructions stored in memory device(s) 1104 and/or mass storage device(s) 1108. Processor(s) 1102 may also include various types of computer-readable media, such as cache memory.
Memory device(s) 1104 include various computer-readable media, such as volatile memory (e.g., random access memory (RAM)) and/or nonvolatile memory (e.g., read-only memory (ROM)). Memory device(s) 1104 may also include rewritable ROM, such as Flash memory.
Mass storage device(s) 1108 include various computer readable media, such as magnetic tapes, magnetic disks, optical disks, solid state memory (e.g., Flash memory), and so forth. Various drives may also be included in mass storage device(s) 1108 to enable reading from and/or writing to the various computer readable media. Mass storage device(s) 1108 include removable media and/or non-removable media.
I/O device(s) 1110 include various devices that allow data and/or other information to be input to or retrieved from computing device 1100. Example I/O device(s) 1110 include cursor control devices, keyboards, keypads, microphones, monitors or other display devices, speakers, printers, network interface cards, modems, lenses, CCDs or other image capture devices, and the like.
Interface(s) 1106 include various interfaces that allow computing device 1100 to interact with other systems, devices, or computing environments. Example interface(s) 1106 include any number of different network interfaces, such as interfaces to local area networks (LANs), wide area networks (WANs), wireless networks, and the Internet.
Bus 1112 allows processor(s) 1102, memory device(s) 1104, interface(s) 1106, mass storage device(s) 1108, and I/O device(s) 1110 to communicate with one another, as well as other devices or components coupled to bus 1112. Bus 1112 represents one or more of several types of bus structures, such as a system bus, PCI bus, IEEE 1394 bus, USB bus, and so forth.
For purposes of illustration, programs and other executable program components are shown herein as discrete blocks, although it is understood that such programs and components may reside at various times in different storage components of computing device 1100, and are executed by processor(s) 1102. Alternatively, the systems and procedures described herein can be implemented in hardware, or a combination of hardware, software, and/or firmware. For example, one or more application specific integrated circuits (ASICs) can be programmed to carry out one or more of the systems and procedures described herein.
Although the present disclosure is described in terms of certain preferred embodiments, other embodiments will be apparent to those of ordinary skill in the art, given the benefit of this disclosure, including embodiments that do not provide all of the benefits and features set forth herein, which are also within the scope of this disclosure. It is to be understood that other embodiments may be utilized, without departing from the scope of the present disclosure.
This application is a continuation of U.S. patent application Ser. No. 17/497,176, filed Oct. 8, 2021, which is a continuation of U.S. patent application Ser. No. 17/141,220, filed Jan. 4, 2021, now U.S. Pat. No. 11,157,516, issued Oct. 26, 2021, which is a continuation of U.S. patent application Ser. No. 16/905,599, filed Jun. 18, 2020, now U.S. Pat. No. 11,010,407, issued May 18, 2021, which is a continuation of U.S. patent application Ser. No. 16/378,371, filed Apr. 8, 2019, now U.S. Pat. No. 11,106,696, issued Aug. 31, 2021, which is a continuation of U.S. patent application Ser. No. 14/518,826, filed Oct. 20, 2014, now U.S. Pat. No. 10,325,032, issued Jun. 18, 2019, which claims the benefit of U.S. Provisional Application No. 61/941,986, filed Feb. 19, 2014, the disclosures of which are incorporated herein by reference in their entirety.