A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
The present disclosure relates to storing data and, in particular, to aggregating data for storage in a common repository.
The subject matter discussed in the background section should not be assumed to be prior art merely as a result of its mention in the background section. Similarly, a problem mentioned in the background section or associated with the subject matter of the background section should not be assumed to have been previously recognized in the prior art. The subject matter in the background section merely represents different approaches, which in and of themselves may also be inventions.
In conventional database systems, users access their data resources in one logical database. A user of such a conventional system typically retrieves data from and stores data on the system using the user's own systems. A user system might remotely access one of a plurality of server systems that might in turn access the database system. Data retrieval from the system might include the issuance of a query from the user system to the database system. The database system might process the request for information received in the query and send to the user system information relevant to the request. The rapid and efficient retrieval of accurate information and subsequent delivery of this information to the user system relies on the data in the database system complying with certain constraints. However, this limits the types, sizes, and kinds of data that can be stored in the database.
In order to provide for more types, kinds, and sizes of data, the database can be supplemented with an additional data store to hold other data and additional data. The data can be searchable separately, or pointers in the database to the separate data store can be searchable. However, the separate data store adds complexity to modifying, correcting, and updating the database and the data store. This added complexity may interfere with users accessing the database and finding data in the separate data store.
Accordingly, it is desirable to provide techniques to improve performance, security, efficiency, and ease of use of the database system.
In accordance with embodiments, there are provided mechanisms and methods for appending data to large data volumes in a multi-tenant store. These mechanisms and methods for appending data to large data volumes can enable embodiments to provide more reliable and faster maintenance of changing data.
In an embodiment and by way of example, a method for appending data to large data volumes is provided. The method embodiment includes receiving new data for a database. The new data is written to a temporary log. The size of the log is compared to a threshold. If the size of the log is greater than the threshold, then the log is written to a data store.
While one or more implementations and techniques are described with reference to an embodiment in which Methods and Systems for Appending Data to Large Data Volumes in a Multi-Tenant Store is implemented in a system having an application server providing a front end for an on-demand database service capable of supporting multiple tenants, the one or more implementations and techniques are not limited to multi-tenant databases nor deployment on application servers. Embodiments may be practiced using other database architectures, e.g., ORACLE®, DB2® by IBM, and the like, without departing from the scope of the embodiments claimed.
Any of the above embodiments may be used alone or together with one another in any combination. The one or more implementations encompassed within this specification may also include embodiments that are only partially mentioned or alluded to or are not mentioned or alluded to at all in this brief summary or in the abstract. Although various embodiments may have been motivated by various deficiencies with the prior art, which may be discussed or alluded to in one or more places in the specification, the embodiments do not necessarily address any of these deficiencies. In other words, different embodiments may address different deficiencies that may be discussed in the specification. Some embodiments may only partially address some deficiencies or just one deficiency that may be discussed in the specification, and some embodiments may not address any of these deficiencies.
In the following drawings like reference numbers are used to refer to like elements. Although the following figures depict various examples of the invention, the invention is not limited to the examples depicted in the figures.
General Overview
Systems and methods are provided for appending data to large data volumes. These systems and methods are particularly valuable in the context of a multi-tenant database.
As used herein, the term multi-tenant database system refers to those systems in which various elements of hardware and software of the database system may be shared by one or more customers. For example, a given application server may simultaneously process requests for a great number of customers, and a given database table may store rows for a potentially much greater number of customers. As used herein, the term query plan refers to a set of steps used to access information in a database system.
Next, mechanisms and methods for appending data to large data volumes will be described with reference to example embodiments. In one example implementation, a database served and accessed through a database server is augmented with a separate data store. The separate data store has a simple file structure with directories and metadata, such as ZFS (an open source file system developed by Sun Microsystems), although a different, simpler, or more complex file structure can be used depending on the application. The files can be accessed through indices, pointers, or references in the database. In the described examples, the data store's files are write-once files with no append capability and a minimum data size, such as 10 MB. The data store therefore can be used to create a small number of large files. The described techniques can be applied to other data store systems with different configurations and constraints. Other databases and file formats can also be supported utilizing similar techniques.
In the examples described below, data, such as database segments, will be stored in the data store. However, because the data store does not work well with smaller files and cannot append, this data will only be written once there is enough data (perhaps 10 MB) to be aggregated into a larger file. The types of data to be aggregated can include incremental data for the database such as inserts, updates, deletes, and supplemental data. This data can be accumulated in a database table (called, for example, AppendLog), and, once the data in the AppendLog reaches the predetermined size limit, it is flushed to the data store.
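A minimal sketch of this accumulate-then-flush pattern is shown below. The class and method names (AppendLog, put, flush_threshold) are illustrative assumptions and are not taken from the described system; the sketch only assumes that the data store exposes a write-once put operation for large files.

```python
import io


class AppendLog:
    """Accumulates incremental changes until they are large enough to flush."""

    def __init__(self, data_store, flush_threshold=10 * 1024 * 1024):  # ~10 MB
        self.data_store = data_store          # assumed to offer put(name, payload)
        self.flush_threshold = flush_threshold
        self.buffer = io.BytesIO()
        self.segment_counter = 0

    def append(self, record: bytes) -> None:
        # Inserts, updates, and deletes are simply appended to the log.
        self.buffer.write(record)
        if self.buffer.tell() >= self.flush_threshold:
            self.flush()

    def flush(self) -> None:
        # Once enough data has accumulated, write one large write-once file
        # to the data store and start a fresh log.
        payload = self.buffer.getvalue()
        if not payload:
            return
        self.data_store.put(f"segment_{self.segment_counter:06d}", payload)
        self.segment_counter += 1
        self.buffer = io.BytesIO()
```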
The database table or tables are subject to queries through application servers. The queries may be in any of a variety of different forms, such as OQL (Object Query Language), SQL (Structured Query Language) queries, individual get-by-id requests, or any other type of query, for example. When queries come to the database server, they need to be performed against both the database server AppendLog and the data store segments. In order for the database to respond to the query, the database servers need an up-to-date copy of the entire AppendLog and the data store segment metadata. Accordingly, the application server, when submitting a query request, can, as part of the request, ensure that the database server has an up-to-date copy of the AppendLog data and the data store segment metadata (cached) prior to forwarding the request to the database server.
Structural Environment
The application server 104 is coupled to a database server 106, which serves information to, or receives information from, the application server in response to the request. The database server includes a table 112 in which the data is stored. This data may contain an append log 114 and a segment file 116. The append log contains some number of smaller files, and additional files are appended to it as changes are made to the content of the database. As described below, the append log is eventually consolidated into a single file that is stored, and a new append log can be started in its place. The segment file contains metadata about files that are stored in another location. The metadata can include file names, ordering, location, and contents information. Alternatively, the append log and segment file may be stored in another location. The table may be in the form of a conventional relational database or in any other form.
The application server is also coupled to a data store 108, as described above. The data store stores segment files 118 and may also store a variety of other files, depending on the particular use made of the data store. In the described examples, a query or other request from the application server is provided only to the database server. In one example, the files of the data store are not searched. Instead, the table includes searchable pointers or indices to the data in the data store. This allows requests to be serviced more quickly. However, for particular requests, or for particular implementations, the data store may contain the pointers or indices or may be searched directly. As shown, the data store includes stored segment files 118 which may be organized using its file system.
The application server also includes an append log 120 and a segment file 122. These allow the application server to track changes and progress to both the append log and the segment file in order to manage the collection, updating, and storing of such data. These may both be saved by the database server, instead, depending upon the application. There may be and typically will be multiple user terminals, application servers, database servers, and data stores. The diagram of
In one example, the database servers are stateless. They locally cache immutable files and have an in-memory cache of immutable data structures. However, they do not have any “on_startup” bootstrapping or any cross-server synchronization of changes, etc. By providing an in-memory cache, the entire database server state (in the database) does not need to be re-created for every request. The in-memory cache allows the tables and other data to be synchronized without any startup process. As described below, incremental changes are handled, while the servers are kept stateless, and the data store file metadata is stored in the database.
Query Handling
In
At block 206, the application server, having received the request, sends it to a database server that can access the database to service the request. The request can contain a current sequence number, so that the database server can ensure that it is working with current data. In this example, the application server accesses a table which provides the most current sequence number for the append log and the segment file. By checking a single authoritative version of the append log and segment file sequence number reference, a single application server can send requests to many different database servers. The database servers are not required to maintain any synchronization.
Referring to
On the other hand, if the sequence numbers do not match, then the database server can request that it be sent the latest updates at block 308. The application server, having sent the request at block 206, listens for a catch up request at block 208. If one is received, then at block 210, the application server builds an append log and segment file to send to the database server. This may then be sent together with the query request at block 212. The response to the catch up request can be a complete replacement of the prior append log and segment file, or, to reduce the amount of data transmitted between the two servers, the response can contain only the data added since the database server’s version was last updated. The database server, for example, can send its most recent sequence number together with its catch up request. By comparing the database server’s most recent number to the absolute most recent version, the application server can determine the differences and send only the differences.
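A brief sketch of building such an incremental catch up response is given below. The function name and the (sequence_number, payload) row shape are illustrative assumptions; the real tables carry additional columns.

```python
def build_catch_up(append_log_rows, segment_file_rows, from_seq, to_seq):
    """Collect only the rows added after from_seq, up to and including to_seq."""
    return {
        "catchup_from_sequence_number": from_seq,
        "append_log": [r for r in append_log_rows if from_seq < r[0] <= to_seq],
        "segment_file": [r for r in segment_file_rows if from_seq < r[0] <= to_seq],
    }


# Example: the database server reports sequence number 3, the core is at 5,
# so only rows 4 and 5 are sent back rather than the whole log.
rows = [(i, f"change-{i}") for i in range(1, 6)]
delta = build_catch_up(rows, [], from_seq=3, to_seq=5)
print(delta["append_log"])  # [(4, 'change-4'), (5, 'change-5')]
```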
In this example, the application server does not send the append log and segment files or their updates with the original request. This is done to reduce the amount of data sent with a request. However, as an alternative, both files may be maintained in a single version and sent to the respective database server in the latest version with each request. The size of these files can be kept small through frequent updates of the data store or by generating many files. Each tenant, organization, customer, etc. may have several different files for different portions of the database fields.
In one example, both the application server and the database server maintain relational database tables and the append logs and segment files are stored in these tables. The append logs and segment files may be stored in any of a variety of different locations in the database that provide sufficient space for the data. In an Oracle Corporation database a BLOB (Binary Large Object) may be used. The BLOB allows several gigabytes of unstructured storage. In the present example, the data may be structured but it need not be structured in a way that is consistent with the rest of the database.
At block 310, the database server receives the response to the catch up request including a partial or complete append log and segment file. After applying the updates at block 312, by updating or replacing, the database server can then process the request at block 314. At block 316, the results are sent back to the application server. At block 214, the application server receives the results and can then service the user at block 216, with a confirmation, report, or reply, depending on the nature of the request.
In one example, the database is divided into groups, organizations, etc. so that, while there may be many append logs and segment files, each one is not very large. This is one reason why any one database server may not be up to date for any one particular table.
As mentioned above, the sequence number (sequence_number) can be used to manage server state for each organization or database table. The sequence number can be used to represent the current state of any given database server organization or table. In one example, a table, such as Table 1, can be used to track the sequence number. The table may use standard 32-way organization partitioning and be an index organized table; however, it may also take other forms. The PK (Primary Key) for the table may be selected as table_id, or any other suitable key.
As indicated in Table 1, the organization identification and the table enumerator are represented as 15 character values, while the sequence number is a number. Any desired string of characters, letters, or numbers may be used, depending on the application. In the described examples, it is either 0 or 1; however, a greater range of numbers may be used, depending on the application.
In order to synchronize different databases to the same information, the sequence number, as described above, can be used in all of the different databases in which it occurs. In such an example, the sequence number in any newly visible rows can be checked against an outside source, such as the application server, to ensure that it is greater than all previously visible sequence numbers for that particular organization and table. Based on this check, the integrity of the data can be maintained without an autonomous updating transaction. Instead, the row for this table can be locked and incremented as part of the transaction.
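A minimal sketch of locking and incrementing the sequence number inside the same transaction as the data change is shown below, using SQLite purely for illustration. The table and column names are modeled loosely on Table 1 but are assumptions, as is the apply_change callback.

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # explicit transactions
conn.execute(
    """CREATE TABLE core_sequence_number (
           organization_id TEXT NOT NULL,
           table_enum      TEXT NOT NULL,
           sequence_number INTEGER NOT NULL,
           PRIMARY KEY (organization_id, table_enum))"""
)
conn.execute("INSERT INTO core_sequence_number VALUES ('org_001', 'contact', 0)")


def apply_change_and_increment(conn, org_id, table_enum, apply_change):
    """Increment the row's sequence number in the same transaction as the change."""
    conn.execute("BEGIN IMMEDIATE")  # take the write lock; no autonomous transaction
    try:
        apply_change(conn)  # e.g. insert the new append-log row
        conn.execute(
            "UPDATE core_sequence_number SET sequence_number = sequence_number + 1 "
            "WHERE organization_id = ? AND table_enum = ?",
            (org_id, table_enum),
        )
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")
        raise


apply_change_and_increment(conn, "org_001", "contact", lambda c: None)
print(conn.execute("SELECT sequence_number FROM core_sequence_number").fetchone())
```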
As described, requests to database servers may contain header information in order to convey the current database server state (core_sequence_num, core_append_log, core_segment_file) for the relevant organization and the table_ids involved in the operation. (For example, a ‘get’ operation would only need the table being ‘got’, but an OQL request would need an entry for each involved table_id).
In one example, each request from the application server to the database server contains, for each table_id in any one organization, a header structure containing: the current_sequence_number (from core_sequence_number); and an optional ‘catch up’ block. The catch up block contains: catchup_from_core_sequence_number; the data from core_segment_file from catchup_sequence_number to current_sequence_number; the data from core_append_log from catchup_sequence_number to current_sequence_number; and any of the schemas from the core_append_log_schema that are required.
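The header structure can be pictured roughly as follows. This is a sketch only; the Python class names and field types are assumptions, while the field contents follow the description above.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class CatchUpBlock:
    """Data the database server is missing, from the catch up point to current."""
    catchup_from_sequence_number: int
    segment_file_rows: List[bytes] = field(default_factory=list)
    append_log_rows: List[bytes] = field(default_factory=list)
    append_log_schemas: List[str] = field(default_factory=list)


@dataclass
class TableRequestHeader:
    """Per-table_id header sent with every request to a database server."""
    organization_id: str
    table_id: str
    current_sequence_number: int
    catch_up: Optional[CatchUpBlock] = None  # included only when requested
```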
When a database server receives a request, the database server actions can be represented in pseudocode as follows:
Examines its in-memory cache to see what the current sequence_number is for the given (organization_id, table_id). (If there is no entry in the cache, then its current_sequence_number can be set to 0).
If the request's current_sequence_number = cached current_sequence_number, then process the request.
If the request's current_sequence_number > cached current_sequence_number
If the optional ‘catch up’ block is specified and if its catchup_from_sequence_number <= the cached current_sequence_number, then apply the catch up data and process the request.
Otherwise, if the request's current_sequence_number < cached current_sequence_number
(This is a race-condition state, meaning some other app-server has ‘pushed’ the server state ahead)
If the request's current_sequence_number is still cached (not too old), then process the request with the state as of that sequence number.
Otherwise, send back a DATABASE_SERVER_CATCHUP_REQUIRED specifying the cached current_sequence_number.
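The decision tree above can be sketched in code as follows. This is an illustrative reading, not the patented implementation: it assumes DATABASE_SERVER_CATCHUP_REQUIRED is returned when the database server's cache is behind the request and no usable catch up block was supplied, and DATABASE_SERVER_SEQUENCE_AHEAD (handled by the application server below) when the request's sequence number is older than anything still cached; the cache layout and function signature are also assumptions.

```python
from enum import Enum


class Outcome(Enum):
    PROCESSED = "processed"
    CATCHUP_REQUIRED = "DATABASE_SERVER_CATCHUP_REQUIRED"
    SEQUENCE_AHEAD = "DATABASE_SERVER_SEQUENCE_AHEAD"


def handle_request(cache, org_id, table_id, request_seq, catch_up, process):
    """Database-server side of the protocol for one (organization_id, table_id).

    cache:    {(org_id, table_id): {"seq": current, "known": set of cached seqs}}
    catch_up: None, or a dict with a "from_seq" key plus the missing log data
    process:  callable invoked with the sequence number to run the query against
    """
    entry = cache.setdefault((org_id, table_id), {"seq": 0, "known": {0}})

    if request_seq == entry["seq"]:
        return Outcome.PROCESSED, process(request_seq)

    if request_seq > entry["seq"]:
        if catch_up is not None and catch_up["from_seq"] <= entry["seq"]:
            # Apply the missing append-log and segment-file data, then process.
            entry["seq"] = request_seq
            entry["known"].add(request_seq)
            return Outcome.PROCESSED, process(request_seq)
        # The cache is too far behind and no usable catch up block was sent.
        return Outcome.CATCHUP_REQUIRED, entry["seq"]

    # request_seq < cached: another application server pushed the state ahead.
    if request_seq in entry["known"]:
        return Outcome.PROCESSED, process(request_seq)
    return Outcome.SEQUENCE_AHEAD, entry["seq"]
```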
The application server, acting upon a client request, performs the actions described in pseudocode below:
Upon receiving DATABASE_SERVER_SEQUENCE_AHEAD, retry after re-fetching the sequence_number.
Upon receiving DATABASE_SERVER_CATCHUP_REQUIRED, retry the request, but include the ‘catchup’ block built from the sequence_number specified in the DATABASE_SERVER_CATCHUP_REQUIRED failure.
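A corresponding application-server retry loop might look like the sketch below, reusing the Outcome enum from the previous sketch. The callables db_server_call, make_catch_up, and fetch_current_seq are assumed helpers, not names from the description.

```python
def send_with_retry(db_server_call, make_catch_up, fetch_current_seq, max_retries=3):
    """Retry a request, handling the two failure codes described above."""
    seq = fetch_current_seq()          # read core_sequence_number
    catch_up = None
    for _ in range(max_retries):
        outcome, value = db_server_call(seq, catch_up)
        if outcome == Outcome.PROCESSED:
            return value
        if outcome == Outcome.SEQUENCE_AHEAD:
            seq = fetch_current_seq()  # re-fetch the sequence number, then retry
            catch_up = None
        elif outcome == Outcome.CATCHUP_REQUIRED:
            # value is the database server's cached sequence number; build the
            # catch up block from that point and retry the same request.
            catch_up = make_catch_up(value, seq)
    raise RuntimeError("request did not succeed after retries")
```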
With this protocol, the database server always has an up-to-date copy (cached) of the core_append_log and the core_segment_files prior to executing any request. This is logically equivalent to transferring the entire state with every request but, for efficiency purposes, the server cache ensures that, on average, only the new append_log data is sent.
Append Log
In one embodiment, the append log may also be stored in a database table as shown by the example of Table 2.
The raw_data/raw_data_blob part of the table stores the actual data of the append log. Typically, data is simply appended to this field as it is developed. The data may be stored in any of a variety of different ways. In one example, the data is stored as Avro serialized binary data. Avro is a data serialization system from the Apache Software Foundation. Each time there is an insert, update, or delete, a row or set of rows is added to the field in a compressed form. For batch inserts or updates, an append_log row may be created for each insert with the raw_data/raw_data_blob storing the set of data. While Avro serialization is described herein, any of a variety of other data storing techniques may be used instead or in addition.
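The sketch below illustrates appending one compressed change set as a new append-log row. It uses SQLite and json + zlib as stand-ins for the database table and the Avro serialization described above; the table and column names are loosely modeled on Table 2 and are assumptions.

```python
import json
import sqlite3
import zlib

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE core_append_log (
           organization_id TEXT,
           table_id        TEXT,
           sequence_number INTEGER,
           raw_data_blob   BLOB)"""
)


def append_rows(conn, org_id, table_id, seq, rows):
    """Serialize and compress one insert/update/delete batch into a new row."""
    payload = zlib.compress(json.dumps(rows).encode("utf-8"))
    conn.execute(
        "INSERT INTO core_append_log VALUES (?, ?, ?, ?)",
        (org_id, table_id, seq, payload),
    )
    conn.commit()


append_rows(conn, "org_001", "contact", 1,
            [{"op": "insert", "id": "003x01", "name": "Pat"}])
```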
At block 406, the application server locates the cache with which it will service the request. This corresponds to the core segment file and the core append log. If there is only one application server, the core files may reside with the application server, but they may also reside in another location.
At block 408, the application server, upon receiving the request, modifies the data in the cache based on the request. This data will be in the append log. The application server, accordingly, also increments the data sequence number for the particular append log at block 410. If appropriate for the applicable protocols, the application server can then reply or confirm to the requestor that the request has been fulfilled at block 412.
As described in
In another example, a database server may also receive an updated append log, or append log portion and sequence number for its append log, so that versions can be tracked between different database servers and between the database server and the application server.
In one example, all of the changes to the data are made by adding one or more additional rows to the append logs. These can be stored in a free form unstructured field of a database, such as a BLOB field, or in some other easily accessible location. The application servers maintain the current version of the append log and send updates to database servers when they are needed. The append log may be highly structured, but in the described examples, it is not.
To structure the data, the append log is periodically processed to apply a usable structure. In one example, this happens when the append log becomes large enough to write to the data store. When the append log becomes large enough, the append log is rewritten and formatted to generate a new segment file. The segment file is then written to the data store. However, the segment file could be used by the application server without being written to the data store as new data accumulates further. Alternatively, the data could be processed to form a new formatted append log. Further data changes could then be added to the new append log until the time for generating a new segment file for the data store.
In the described examples, the append log provides a location to which new data may very quickly be added. As a result, queries are not slowed by waiting for new data to be combined or consolidated with older data. The system simply appends the new data to the log and moves on. Because data is added to the append log without any significant processing, there may be additions, deletions, and replacements of particular fields in the same log. In order to use the data to reply to a query, the entire append log can be read to determine the actual status of any particular data value. If, for example, an address is added, and then modified in the append log, then only a complete read of the log will provide the current value. When it comes time to process the append log, the replaced values can be deleted, so that only the last, most current values remain.
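The last-write-wins replay described here can be sketched as a simple fold over the log. The entry shape (operation, record id, field values) is an illustrative assumption.

```python
def replay(append_entries):
    """Read the whole append log to determine the current value of each field."""
    state = {}
    for op, record_id, fields in append_entries:
        if op == "delete":
            state.pop(record_id, None)
        else:  # insert or update: later entries win
            state.setdefault(record_id, {}).update(fields)
    return state


log = [
    ("insert", "003x01", {"name": "Pat", "address": "1 Main St"}),
    ("update", "003x01", {"address": "9 Oak Ave"}),  # later value supersedes
]
print(replay(log)["003x01"]["address"])  # -> 9 Oak Ave
```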
In an alternative configuration, the append log may be maintained as it is created. In that configuration, an address change would not simply be appended but compared to any other previous entries, so that the earlier values can be changed. This requires more processing and analysis and may delay access to the data; however, it reduces the need to reformat the append log later.
Optimization
The determined size, time, or number is then compared to an appropriate threshold. If the threshold has been reached, then at block 506, the application server generates and sends an optimization request. Otherwise the application server will wait or count and repeat the analysis and threshold comparison. In one example, the threshold is selected as a desired size in bytes for writing a file to the data store. The data store may have a minimum file size by design, or the system may be configured to limit the file size of files in the data store in order to reduce the total number of files. Alternatively, a threshold may be used to limit the number or frequency of optimization routines.
The optimization request is sent to the unit that will perform the optimization. This may be any unit from, for example,
The application server may include a sequence number for the most current append log and other information as shown in Tables 3 and 4 in its optimization request. The database server, upon receiving the optimization request from the application server can compare its sequence number for the append log to the received sequence number at block 508. If the database server does not have the current version of the append log, then at block 510 it sends a catch up request back to the application server. The catch up request is a request for the latest version with the sequence number that matches the sequence number received from the application server. The application server will respond to the request and at block 512, the database server receives the most current version of the append log.
Once the database server has the most recent version of the append log, it can then perform the optimization. The optimization is a process that converts the many entries appended together to form the append log into a single segment file with a structured format. To do so, the database server can read all of the entries in the log, compare them, and rewrite them as a single set of entries containing only the most current data in the log. The entries can also be organized and sorted for more efficient search, retrieval, and modification later.
The optimization process at block 514 may be performed by reading all of the files at depth 0 and 1. This is typically all of the files in the append log. The application server can then rewrite all of the files as depth 0 files, delete the existing files, and then write the rewritten files into a new segment file. The new segment file can then be written into the main cache at block 516 to become the new core segment file. The sequence number for the new segment file can be incremented at block 518. The new segment file can also be written into the data store at block 520. The sequence number at the data store can also be incremented or updated to reflect the completion of the operation at block 522.
The segment file can be a file that is created and completed by a single optimization operation. At the next optimization operation, a new segment file is then created. Alternatively, the segment file can be updated with each optimization. The segment file can then be a set of rows, segments, or sub-files. Each change to the segment file can be accompanied by a change to a sequence number so that versions can be tracked as with the append log.
The optimization can be described in pseudo-code as follows:
The application server sends a request to a database server to start optimizing at depth=0.
The database server processes the request (reading all files at depth=0 and depth=1, and rewriting the combination as a new set of depth=0 files).
Once done, the application server deletes any existing core_segment_files for the organization/table_id and depth=0 or depth=1, then writes 1 row per newly created file into core_segment_file where
After that, the core_segment_file for this organization/table_id will contain the newly created files at depth 0, and no files at depth 1 (since those were optimized into depth 0).
The sequence number for each of the new rows will be the same number. The lock to increment the sequence number can be taken at the last possible instant before commit.
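The compaction step can be pictured roughly as below. This is a sketch under assumed data shapes (each segment file as a dict with 'depth', 'entries', and 'sequence_number'); the real operation reads and rewrites files in the separate data store and updates the core_segment_file rows accordingly.

```python
def optimize(segment_files, append_log_entries, new_sequence_number):
    """Compact the append log plus all depth-0/1 files into a new depth-0 file."""
    # Read everything at depth 0 and depth 1 (typically the whole append log).
    merged = list(append_log_entries)
    for f in segment_files:
        if f["depth"] in (0, 1):
            merged.extend(f["entries"])

    # Rewrite the combination as a new depth-0 file. A real pass would also
    # drop superseded rows, as in the replay sketch above, and sort the entries.
    new_file = {"depth": 0, "entries": merged,
                "sequence_number": new_sequence_number}

    # Old depth-0/1 rows are replaced by one row per newly created file, all of
    # which share the same (incremented) sequence number.
    untouched = [f for f in segment_files if f["depth"] not in (0, 1)]
    return untouched + [new_file]
```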
The operations and structures described above may be implemented in a variety of different systems and environments.
System Overview
Environment 610 is an environment in which an on-demand database service exists. User system 612 may be any machine or system that is used by a user to access a database user system. For example, any of user systems 612 can be a handheld computing device, a mobile phone, a laptop computer, a work station, and/or a network of computing devices. As illustrated in
An on-demand database service, such as system 616, is a database system that is made available to outside users who do not necessarily need to be concerned with building and/or maintaining the database system; instead, the database system may be available for their use when the users need it (e.g., on the demand of the users). Some on-demand database services may store information from one or more tenants into tables of a common database image to form a multi-tenant database system (MTS). Accordingly, “on-demand database service 616” and “system 616” will be used interchangeably herein. A database image may include one or more database objects. A relational database management system (RDBMS) or the equivalent may execute storage and retrieval of information against the database object(s). Application platform 618 may be a framework that allows the applications of system 616 to run, such as the hardware and/or software, e.g., the operating system. In an embodiment, on-demand database service 616 may include an application platform 618 that enables creation, managing and executing one or more applications developed by the provider of the on-demand database service, users accessing the on-demand database service via user systems 612, or third party application developers accessing the on-demand database service via user systems 612.
The users of user systems 612 may differ in their respective capacities, and the capacity of a particular user system 612 might be entirely determined by permissions (permission levels) for the current user. For example, where a salesperson is using a particular user system 612 to interact with system 616, that user system has the capacities allotted to that salesperson. However, while an administrator is using that user system to interact with system 616, that user system has the capacities allotted to that administrator. In systems with a hierarchical role model, users at one permission level may have access to applications, data, and database information accessible by a lower permission level user, but may not have access to certain applications, database information, and data accessible by a user at a higher permission level. Thus, different users will have different capabilities with regard to accessing and modifying application and database information, depending on a user's security or permission level.
Network 614 is any network or combination of networks of devices that communicate with one another. For example, network 614 can be any one or any combination of a LAN (local area network), WAN (wide area network), telephone network, wireless network, point-to-point network, star network, token ring network, hub network, or other appropriate configuration. As the most common type of computer network in current use is a TCP/IP (Transmission Control Protocol and Internet Protocol) network, such as the global internetwork of networks often referred to as the “Internet” with a capital “I,” that network will be used in many of the examples herein. However, it should be understood that the networks that the present invention might use are not so limited, although TCP/IP is a frequently implemented protocol.
User systems 612 might communicate with system 616 using TCP/IP and, at a higher network level, use other common Internet protocols to communicate, such as HTTP, FTP, AFS, WAP, etc. In an example where HTTP is used, user system 612 might include an HTTP client commonly referred to as a “browser” for sending and receiving HTTP messages to and from an HTTP server at system 616. Such an HTTP server might be implemented as the sole network interface between system 616 and network 614, but other techniques might be used as well or instead. In some implementations, the interface between system 616 and network 614 includes load sharing functionality, such as round-robin HTTP request distributors to balance loads and distribute incoming HTTP requests evenly over a plurality of servers. At least for the users that are accessing that server, each of the plurality of servers has access to the MTS' data; however, other alternative configurations may be used instead.
In one embodiment, system 616, shown in
One arrangement for elements of system 616 is shown in
Several elements in the system shown in
According to one embodiment, each user system 612 and all of its components are operator configurable using applications, such as a browser, including computer code run using a central processing unit such as an Intel Pentium® processor or the like. Similarly, system 616 (and additional instances of an MTS, where more than one is present) and all of their components might be operator configurable using application(s) including computer code to run using a central processing unit such as processor system 617, which may include an Intel Pentium® processor or the like, and/or multiple processor units. A computer program product embodiment includes a machine-readable storage medium (media) having instructions stored thereon/in which can be used to program a computer to perform any of the processes of the embodiments described herein. Computer code for operating and configuring system 616 to intercommunicate and to process webpages, applications and other data and media content as described herein is preferably downloaded and stored on a hard disk, but the entire program code, or portions thereof, may also be stored in any other volatile or non-volatile memory medium or device as is well known, such as a ROM or RAM, or provided on any media capable of storing program code, such as any type of rotating media including floppy disks, optical discs, digital versatile disks (DVD), compact disks (CD), microdrives, and magneto-optical disks, and magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data. Additionally, the entire program code, or portions thereof, may be transmitted and downloaded from a software source over a transmission medium, e.g., over the Internet, or from another server, as is well known, or transmitted over any other conventional network connection as is well known (e.g., extranet, VPN, LAN, etc.) using any communication medium and protocols (e.g., TCP/IP, HTTP, HTTPS, Ethernet, etc.) as are well known. It will also be appreciated that computer code for implementing embodiments of the present invention can be implemented in any programming language that can be executed on a client system and/or server or server system such as, for example, C, C++, HTML, any other markup language, Java™, JavaScript, ActiveX, any other scripting language, such as VBScript, and many other programming languages as are well known. (Java™ is a trademark of Sun Microsystems, Inc.).
According to one embodiment, each system 616 is configured to provide webpages, forms, applications, data and media content to user (client) systems 612 to support the access by user systems 612 as tenants of system 616. As such, system 616 provides security mechanisms to keep each tenant's data separate unless the data is shared. If more than one MTS is used, they may be located in close proximity to one another (e.g., in a server farm located in a single building or campus), or they may be distributed at locations remote from one another (e.g., one or more servers located in city A and one or more servers located in city B). As used herein, each MTS could include one or more logically and/or physically connected servers distributed locally or across one or more geographic locations. Additionally, the term “server” is meant to include a computer system, including processing hardware and process space(s), and an associated storage system and database application (e.g., OODBMS or RDBMS) as is well known in the art. It should also be understood that “server system” and “server” are often used interchangeably herein. Similarly, the database object described herein can be implemented as single databases, a distributed database, a collection of distributed databases, a database with redundant online or offline backups or other redundancies, etc., and might include a distributed database or storage network and associated processing intelligence.
User system 612, network 614, system 616, tenant data storage 622, and system data storage 624 were discussed above in
Application platform 618 includes an application setup mechanism 738 that supports application developers' creation and management of applications, which may be saved as metadata into tenant data storage 622 by save routines 736 for execution by subscribers as one or more tenant process spaces 704 managed by tenant management process 710, for example. Invocations to such applications may be coded using PL/SOQL 734, which provides a programming language style interface extension to API 732. A detailed description of some PL/SOQL language embodiments is discussed in commonly owned U.S. Pat. No. 7,730,478 entitled, METHOD AND SYSTEM FOR ALLOWING ACCESS TO DEVELOPED APPLICATIONS VIA A MULTI-TENANT DATABASE ON-DEMAND DATABASE SERVICE, issued Jun. 1, 2010 to Craig Weissman, which is incorporated in its entirety herein for all purposes. Invocations to applications may be detected by one or more system processes, which manage retrieving application metadata 716 for the subscriber making the invocation and executing the metadata as an application in a virtual machine.
Each application server 700 may be communicably coupled to database systems, e.g., having access to system data 625 and tenant data 623, via a different network connection. For example, one application server 700_1 might be coupled via the network 614 (e.g., the Internet), another application server 700_N-1 might be coupled via a direct network link, and another application server 700_N might be coupled by yet a different network connection. Transmission Control Protocol and Internet Protocol (TCP/IP) are typical protocols for communicating between application servers 700 and the database system. However, it will be apparent to one skilled in the art that other transport protocols may be used to optimize the system depending on the network interconnect used.
In certain embodiments, each application server 700 is configured to handle requests for any user associated with any organization that is a tenant. Because it is desirable to be able to add and remove application servers from the server pool at any time for any reason, there is preferably no server affinity for a user and/or organization to a specific application server 700. In one embodiment, therefore, an interface system implementing a load balancing function (e.g., an F5 Big-IP load balancer) is communicably coupled between the application servers 700 and the user systems 612 to distribute requests to the application servers 700. In one embodiment, the load balancer uses a least connections algorithm to route user requests to the application servers 700. Other examples of load balancing algorithms, such as round robin and observed response time, also can be used. For example, in certain embodiments, three consecutive requests from the same user could hit three different application servers 700, and three requests from different users could hit the same application server 700. In this manner, system 616 is multi-tenant, wherein system 616 handles storage of, and access to, different objects, data and applications across disparate users and organizations.
As an example of storage, one tenant might be a company that employs a sales force where each salesperson uses system 616 to manage their sales process. Thus, a user might maintain contact data, leads data, customer follow-up data, performance data, goals and progress data, etc., all applicable to that user's personal sales process (e.g., in tenant data storage 622). In an example of an MTS arrangement, since all of the data and the applications to access, view, modify, report, transmit, calculate, etc., can be maintained and accessed by a user system having nothing more than network access, the user can manage his or her sales efforts and cycles from any of many different user systems. For example, if a salesperson is visiting a customer and the customer has Internet access in their lobby, the salesperson can obtain critical updates as to that customer while waiting for the customer to arrive in the lobby.
While each user's data might be separate from other users' data regardless of the employers of each user, some data might be organization-wide data shared or accessible by a plurality of users or all of the users for a given organization that is a tenant. Thus, there might be some data structures managed by system 616 that are allocated at the tenant level while other data structures might be managed at the user level. Because an MTS might support multiple tenants including possible competitors, the MTS should have security protocols that keep data, applications, and application use separate. Also, because many tenants may opt for access to an MTS rather than maintain their own system, redundancy, up-time, and backup are additional functions that may be implemented in the MTS. In addition to user-specific data and tenant specific data, system 616 might also maintain system level data usable by multiple tenants or other data. Such system level data might include industry reports, news, postings, and the like that are sharable among tenants.
In certain embodiments, user systems 612 (which may be client systems) communicate with application servers 700 to request and update system-level and tenant-level data from system 616 that may require sending one or more queries to tenant data storage 622 and/or system data storage 624. System 616 (e.g., an application server 700 in system 616) automatically generates one or more SQL statements (e.g., one or more SQL queries) that are designed to access the desired information. System data storage 624 may generate query plans to access the requested data from the database.
Each database can generally be viewed as a collection of objects, such as a set of logical tables, containing data fitted into predefined categories. A “table” is one representation of a data object, and may be used herein to simplify the conceptual description of objects and custom objects according to the present invention. It should be understood that “table” and “object” may be used interchangeably herein. Each table generally contains one or more data categories logically arranged as columns or fields in a viewable schema. Each row or record of a table contains an instance of data for each category defined by the fields. For example, a CRM database may include a table that describes a customer with fields for basic contact information such as name, address, phone number, fax number, etc. Another table might describe a purchase order, including fields for information such as customer, product, sale price, date, etc. In some multi-tenant database systems, standard entity tables might be provided for use by all tenants. For CRM database applications, such standard entities might include tables for Account, Contact, Lead, and Opportunity data, each containing pre-defined fields. It should be understood that the word “entity” may also be used interchangeably herein with “object” and “table”.
While the invention has been described by way of example and in terms of the specific embodiments, it is to be understood that the invention is not limited to the disclosed embodiments. To the contrary, it is intended to cover various modifications and similar arrangements as would be apparent to those skilled in the art. Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.
This continuation application is related to, and claims priority to, U.S. patent application Ser. No. 12/961,809 entitled “Methods and Systems for Appending Data to Large Data Volumes in a Multi-Tenant Store,” filed Dec. 7, 2010, which claims the benefit of U.S. Provisional Patent Application No. 61/324,955 entitled Methods and Systems for Appending Data to Large Data Volumes in a Multi-Tenant Store, by Eidson et al., filed Apr. 16, 2010, the entire contents of which are incorporated herein by reference.