A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
The current invention relates generally to generating messages for a variety of reasons in a database network system.
The subject matter discussed in the background section should not be assumed to be prior art merely as a result of its mention in the background section. Similarly, a problem mentioned in the background section or associated with the subject matter of the background section should not be assumed to have been previously recognized in the prior art. The subject matter in the background section merely represents different approaches, which in and of themselves may also be inventions.
In conventional database systems, users access their data resources in one logical database. A user of such a conventional system typically retrieves data from and stores data on the system using the user's own systems. A user system might remotely access one of a plurality of server systems that might in turn access the database system. Data retrieval from the system might include the issuance of a query from the user system to the database system. The database system might process the request for information received in the query and send to the user system information relevant to the request.
During use of the aforementioned systems, data is constantly being updated. Typically, this is accomplished by sending a message from a database server system to an endpoint system that prompts such an update. In situations where such messages are automatically triggered in a blanket manner (e.g. for every change made, etc.), the number of messages being sent can quickly grow, thereby causing bandwidth problems, etc. While associated server systems often queue such large numbers of messages, they typically do so using a single queue, which often fails to address the growing latency that accompanies a mounting number of messages. There is thus a need for addressing these and/or other issues.
In accordance with embodiments, there are provided mechanisms and methods for selecting amongst a plurality of processes to send a message (e.g. a message for updating an endpoint system, etc.). These mechanisms and methods for selecting amongst a plurality of processes to send a message can enable embodiments to utilize more than one queue for sending such a message. The ability of embodiments to provide such a multi-process feature can, in turn, prevent the latency that typically accompanies a mounting number of messages.
In an embodiment and by way of example, a method for selecting amongst a plurality of processes to send a message is provided. The method embodiment includes detecting a trigger for automatically sending a message in association with a subscriber of an on-demand database service. In use, message information is retrieved from a portion of a database being managed by the on-demand database service. Further, at least one of a plurality of processes is selected for sending the message and the message information.
While the present invention is described with reference to an embodiment in which techniques for selecting amongst a plurality of processes to send a message are implemented in a system having an application server providing a front end for an on-demand database service capable of supporting multiple tenants, the present invention is not limited to multi-tenant databases nor deployment on application servers. Embodiments may be practiced using other database architectures, e.g., ORACLE®, DB2® by IBM, and the like, without departing from the scope of the embodiments claimed.
Any of the above embodiments may be used alone or together with one another in any combination. Inventions encompassed within this specification may also include embodiments that are only partially mentioned or alluded to or are not mentioned or alluded to at all in this brief summary or in the abstract. Although various embodiments of the invention may have been motivated by various deficiencies with the prior art, which may be discussed or alluded to in one or more places in the specification, the embodiments of the invention do not necessarily address any of these deficiencies. In other words, different embodiments of the invention may address different deficiencies that may be discussed in the specification. Some embodiments may only partially address some deficiencies or just one deficiency that may be discussed in the specification, and some embodiments may not address any of these deficiencies.
In the following drawings like reference numbers are used to refer to like elements. Although the following figures depict various examples of the invention, the invention is not limited to the examples depicted in the figures.
Systems and methods are provided for selecting amongst a plurality of processes to send a message (e.g. a message for updating an endpoint system, etc.).
During use of database systems, data is constantly being updated. Typically, this is accomplished by sending a message from a database server system to an endpoint system that prompts such update. The number of messages being sent can quickly grow, particularly in situations where such messages are automatically triggered in a blanket manner (e.g. for every change made, etc.), thereby causing bandwidth problems, etc.
Thus, mechanisms and methods are provided for selecting amongst a plurality of processes to send a message, which can, in turn, enable embodiments to utilize more than one queue for sending such a message. The ability of embodiments to provide such a multi-process feature can, in turn, prevent the latency that typically accompanies a mounting number of messages.
Next, mechanisms and methods for providing such ability to select amongst a plurality of processes to send a message will be described with reference to example embodiments.
In the context of the present description, the message may include any data structure that is capable of being used to communicate message information in the manner set forth below. In one possible embodiment, the purpose of the message may be to update a database (e.g. of an endpoint system, etc.) associated with the subscriber. For example, the message may reflect at least one change (e.g. an addition, deletion, modification, etc.) to be made to such database. In another possible embodiment, the message may be generated utilizing a Web Services Description Language (WSDL) including customized fields, etc. Further, the trigger may include any event that results in the message being automatically sent.
Next, in operation 104, message information is retrieved from a portion of a database being managed by the on-demand database service. In the aforementioned embodiment where the on-demand database service includes a multi-tenant database system, the foregoing portion of the database may be that which corresponds to the particular subscriber associated with the trigger. In the present description, the message information may include any data capable of being stored by the database being managed by the on-demand database service.
At least one of a plurality of processes is further selected for sending the message and the message information. See operation 106. Such processes may include any processes that are capable of resulting in the message being sent. In one embodiment, the message and/or the message information (or any other data structure) may be queued in one or more queues associated with the particular selected process.
Further, in various embodiments, the processes may differ in at least one respect. For example, in one embodiment, a first one of the processes may be capable of processing the message faster than a second one of the processes. As an option, the processes may or may not result in the message and associated information being sent over a network (e.g. the Internet, etc.). As an option, the message may be sent using any desired protocol including, but not limited to, simple object access protocol (SOAP), TCP/IP, HTTPS, etc.
In different embodiments, the selection of the processes may be based on any desired criteria. For example, such criteria may be a function of the subscriber, the service, the message (e.g. type, size, etc.), etc. Additional information regarding various examples of such criteria will be set forth later during the description of different embodiments illustrated in subsequent figures.
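To make the selection and queueing described above concrete, the following is a minimal sketch (in Python, not the claimed implementation) of routing a message work item to one of two queues based on illustrative criteria such as subscriber history and message size; the names select_process and enqueue_message, the slow_subscribers set, and the size threshold are all assumptions for illustration.

```python
from queue import Queue

# Hypothetical queues feeding two processes of differing speed.
fast_queue: Queue = Queue()
slow_queue: Queue = Queue()

def select_process(subscriber_id: str, message_size: int,
                   slow_subscribers: set) -> Queue:
    """Pick a queue using illustrative criteria: subscribers with a history of
    slow acknowledgements, or unusually large messages, are routed to the slow
    process; everything else goes to the fast process."""
    if subscriber_id in slow_subscribers or message_size > 1_000_000:
        return slow_queue
    return fast_queue

def enqueue_message(subscriber_id: str, message_info: dict,
                    slow_subscribers: set) -> None:
    """Queue the message and its associated information as one work item."""
    work_item = {"subscriber": subscriber_id, "info": message_info}
    chosen = select_process(subscriber_id, len(str(message_info)), slow_subscribers)
    chosen.put(work_item)

# Usage: a subscriber with a poor response history lands in the slow queue.
enqueue_message("acme", {"changed_record": "001"}, slow_subscribers={"acme"})
```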
In any case, by providing multiple processes by which the message may be sent, the method 100 may be more apt to manage a larger number of messages. This may, in turn, help prevent latency that typically accompanies a mounting number of messages.
More illustrative information will now be set forth regarding various optional architectures and features with which the foregoing framework may or may not be implemented, per the desires of the user. It should be strongly noted that the following information is set forth for illustrative purposes and should not be construed as limiting in any manner. Any of the following features may be optionally incorporated with or without the exclusion of other features described.
As shown, a workflow rule 202 is included for triggering a message to be sent. In one embodiment, the workflow rule 202 may initiate such trigger as a result of a change to data in a database of an associated on-demand database service. As will soon become apparent, it may be desired to propagate such change in the database to a client or server system of the relevant subscriber, by sending the associated message.
In one embodiment, the aforementioned triggering may be governed by a plurality of rules that may or may not be subscriber-configurable. Additional descriptions relating to exemplary operation of the workflow rule 202 will be set forth during the description of a different embodiment illustrated in
As further shown, a database table 204 is provided. In response to the trigger of the workflow rule 202, the database table 204 is populated with various information that, in turn, enables the creation of the appropriate message. More information regarding one possible database table will be set forth during the description of a different embodiment illustrated in
Further included is logic 206 for interfacing with the database table 204. Such logic 206 serves to retrieve and update information in the database table 204 for the purpose of selecting among a plurality of processes to generate the appropriate message. In one embodiment, such logic 206 may be implemented in the context of a batch server or the like. More information regarding exemplary operation of the logic 206 will be set forth during the description of a different embodiment shown in
Further, the logic 206 performs this task under the direction of an action table 208. To accomplish this, the action table 208 may include various information, rules, etc. that indicate which of the processes should be selected under which conditions, etc. Additional information regarding one possible action table 208 will be set forth during the description of a different embodiment illustrated in
To accomplish this, the logic 206 feeds work items to a plurality of queues 210, 214 that feed respective processes 212, 216. As shown, the processes 212, 216 may include a slow process 212 and a fast process 216. While two processes 212, 216 are shown, it should be noted that other embodiments equipped with more processes are contemplated. For example, additional processes that exhibit different speeds or other characteristics may be provided.
As will soon become apparent, the processes 212, 216 serve to generate and send messages and associated information to a remote subscriber device for the purpose of updating data thereon. Further, for reasons that will soon become apparent, a watermark 218 is provided in conjunction with each of the queues 210, 214 to indicate when work items have fallen below a predetermined number. More information regarding exemplary operation of the processes 212, 216 will be set forth during the description of a different embodiment shown in
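The watermark 218 can be pictured as a low-water threshold on each queue. The sketch below, with an assumed WatermarkedQueue class and illustrative thresholds, shows one way such a signal might be exposed; it is a hedged illustration rather than the disclosed embodiment.

```python
from collections import deque

class WatermarkedQueue:
    """Illustrative queue with a low watermark (cf. watermark 218): once the
    number of pending work items falls below the watermark, the queue reports
    that it should be refilled from the database table."""

    def __init__(self, low_watermark: int = 10):
        self._items = deque()
        self.low_watermark = low_watermark

    def put(self, work_item) -> None:
        self._items.append(work_item)

    def get(self):
        return self._items.popleft() if self._items else None

    def needs_refill(self) -> bool:
        # True once pending work items fall below the predetermined number.
        return len(self._items) < self.low_watermark

# Hypothetical slow and fast queues feeding the respective processes 212, 216.
slow_queue = WatermarkedQueue(low_watermark=10)
fast_queue = WatermarkedQueue(low_watermark=50)
```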
As illustrated, the database table 300 may include a plurality of rows 301 that each correspond with an associated message to be sent. In one embodiment, each of the rows 301 may be created and/or populated in response to a trigger prompted by any desired mechanism (e.g. the workflow rule 202 of
As shown, each row 301 may include a subscriber identifier 302 that uniquely identifies a subscriber for which a message is to be sent. Each row 301 may also include a deliverable 304 that indicates various information to be sent in conjunction with the message. For example, the information may include a pointer identifying a location of data that has been changed in an associated database. In various embodiments, each row 301 may reflect a single change or a large number of changes (for consolidation of messages, etc.). As an option, each of the rows may be exposed (via a user interface, etc.) to the corresponding subscriber, so that the subscriber can view the relevant contents, and even delete desired rows.
For reasons that will soon become apparent, a status 306 is also stored in association with each message-specific row. Such status 306 may indicate, for instance, whether the associated message has been assigned to a process (e.g. the processes 212, 216) for generation/transmission, etc. In another possible embodiment, the status 306 need not necessarily be included in the database table 300 and, instead, two or more database tables may be included. Specifically, a first database table may be populated by the workflow rule(s), and a second pending database table may be used to retrieve rows from the first database table to operate upon.
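One hypothetical way to represent a row 301 of the database table 300, with the subscriber identifier 302, deliverable 304, and status 306 just described, is sketched below; the field names and status values are illustrative assumptions rather than elements recited in the disclosure.

```python
from dataclasses import dataclass
from enum import Enum

class MessageStatus(Enum):
    PENDING = "pending"    # row created by a workflow rule, not yet assigned
    ASSIGNED = "assigned"  # handed to a fast or slow process
    FAILED = "failed"      # send attempted, failure recorded for retry

@dataclass
class MessageRow:
    """One row 301 of the message database table: a subscriber identifier 302,
    a deliverable 304 pointing at the changed data, and a status 306."""
    subscriber_id: str
    deliverable: str                               # e.g. a pointer/key to changed data
    status: MessageStatus = MessageStatus.PENDING
```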
As shown, the action table 400 may include a plurality of rows 401 that each correspond with an associated subscriber. In use, various logic (e.g. logic 206 of
To accomplish this, each of the rows 401 includes a subscriber identifier 402 as well as what information (see item 404) should be sent with each message. Such information may vary on a subscriber-by-subscriber basis, and thus may be dictated accordingly. Further included are one or more triggering rules 406 to indicate under what conditions a message work item should be generated and assigned to an associated process. It should be noted that the triggering rules 406 may or may not necessarily be the same as the triggers that initiated the message generation procedure (e.g. by the workflow rule 202 of
Still yet, the rows 401 may each further include a process indicator 408 for identifying which of the processes should receive the work item (and associated information per item 404) when appropriate per the triggering rule(s) 406. For example, in the context of the system 200 of
For example, in one embodiment, each of the subscribers may be initially assigned, by default, to a fast process. Then, depending on various subscriber-specific performance factors, the subscribers may be later assigned to the slow process, by updating the process indicator 408 accordingly. For instance, a subscriber may be eligible for such a change if a system of such subscriber fails to respond to/acknowledge messages, etc. within a predetermined amount of time (with such happening a threshold number of times, etc.). In such case, the subscriber may be thereafter assigned to the slow process. In one embodiment, all of the messages associated with the particular subscriber may be assigned to the slow process in the event that one or more of the messages cause the foregoing threshold to be met. Thus, in various embodiments, the appropriate process may be selected based on a subscriber of the on-demand database service. Specifically, the process may be selected based on a past behavior (e.g. response time, etc.) of the subscriber of the on-demand database service, etc.
Of course, a mechanism may be put in place to allow the subscribers to earn back their fast process status. As will soon become apparent, such a feature ensures that subscribers that negatively impact the speed of the fast process may be assigned to the slow process.
Still yet, various other information may be included in the action table 400, such as a user description 410 as well as a destination 412 to which the message is to be sent. Such destination 412 may, in one embodiment, include a uniform resource locator (URL) associated with the particular subscriber indicated by the subscriber identifier 402.
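A hedged sketch of a row 401 of the action table 400, together with the demotion and earn-back behavior described above, might look as follows. The concrete thresholds, field names, and the record_response helper are assumptions introduced for illustration; the source only specifies that a threshold number of late responses exists.

```python
from dataclasses import dataclass, field

FAILURE_THRESHOLD = 3     # illustrative; the source only specifies "a threshold"
RECOVERY_THRESHOLD = 10   # illustrative count of timely acknowledgements

@dataclass
class ActionRow:
    """One row 401 of the action table."""
    subscriber_id: str                                     # item 402
    fields_to_send: list = field(default_factory=list)     # item 404
    triggering_rules: list = field(default_factory=list)   # item 406
    process: str = "fast"          # item 408: subscribers start on the fast process
    description: str = ""          # item 410
    destination_url: str = ""      # item 412
    consecutive_timeouts: int = 0
    consecutive_acks: int = 0

def record_response(row: ActionRow, timed_out: bool) -> None:
    """Demote a subscriber to the slow process after repeated late responses,
    and let it earn back fast-process status after sustained timely ones."""
    if timed_out:
        row.consecutive_timeouts += 1
        row.consecutive_acks = 0
        if row.consecutive_timeouts >= FAILURE_THRESHOLD:
            row.process = "slow"
    else:
        row.consecutive_acks += 1
        row.consecutive_timeouts = 0
        if row.process == "slow" and row.consecutive_acks >= RECOVERY_THRESHOLD:
            row.process = "fast"
```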
As shown, it may be determined first if a change has been made to a database associated with an on-demand database service. See decision 502. In various embodiments, such change may reflect an addition of data, a deletion of data, a modification of data, etc. Further, such change may result from any manual and/or automatic process that may or may not be subscriber-specific.
If it is determined that a change has indeed been made per decision 502, it is then determined whether a message is required. See decision 504. In one embodiment, a message may not necessarily be generated for each change. For example, in certain circumstances, a message may not necessarily be generated for certain types of changes (e.g. those of less importance/relevance, those generated by certain users, etc.). In various embodiments, the foregoing circumstances may be configured by the subscriber, utilizing a user interface. In another embodiment, such decision may be dictated by various triggering rules associated with an action table (e.g. triggering rules 406 of the action table 400 of
Further, such decision 504 may be different for different subscribers. In other words, a plurality of the triggers may be different for a plurality of different subscribers of the on-demand database service. To enforce such subscriber-specific decision making process, each change may be authenticated with a subscriber identifier.
Thus, if it is determined that a change has been made per decision 502 and that such change warrants the generation of a message per decision 504, a database table (e.g. database table 204 of
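The decisions 502 and 504 and the resulting population of the database table can be pictured with the short sketch below; the rule callables, field names, and sample rule are hypothetical, and a real implementation would live inside the on-demand service rather than in standalone code.

```python
def on_change(change: dict, triggering_rules: list, subscriber_id: str,
              message_table: list) -> None:
    """Sketch of decisions 502/504: a change is detected, the subscriber's
    triggering rules decide whether a message is required, and if so a
    pending row is appended to the message database table."""
    if not all(rule(change) for rule in triggering_rules):
        return  # the change does not warrant a message for this subscriber
    message_table.append({
        "subscriber_id": subscriber_id,          # each change is tied to a subscriber
        "deliverable": change.get("record_id"),  # pointer to the changed data
        "status": "pending",
    })

# Usage: a hypothetical rule under which only sufficiently important changes
# trigger a message for this subscriber.
rules = [lambda c: c.get("importance", 0) >= 5]
table: list = []
on_change({"record_id": "001", "importance": 7}, rules, "acme", table)
```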
It is first determined in decision 602 whether a queue (e.g. queues 210, 214 of
If it is determined in decision 602 that a queue will soon be empty, a batch of additional rows may be pulled from a database table (e.g. database table 204 of
Next, as an option, messages represented by the batch of rows may be grouped according to an associated subscriber of the on-demand database service. See operation 606. In one embodiment, a subscriber identifier (e.g. subscriber identifier 302 of
To this end, each message work item may be stored in a queue associated with the appropriate process, in response to the selection thereof. Specifically, based on a process associated with each subscriber, the relevant message work items may be assigned to and stored in the appropriate (e.g. fast/slow) queue, before being processed. Note operation 608. Once assigned to the appropriate queue, the work items are then ready for processing by the corresponding process. More information regarding exemplary operation of such logic will now be set forth, according to an embodiment illustrated in
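Operations 602 through 608 can be summarized with the following sketch; pull_batch, process_for_subscriber, the batch size, and the queues dictionary are all placeholders assumed for illustration, not names taken from the disclosure.

```python
from collections import defaultdict

BATCH_SIZE = 100   # illustrative batch size, not specified in the source

def refill(queues: dict, pending_count: int, low_watermark: int,
           pull_batch, process_for_subscriber) -> None:
    """Sketch of operations 602-608: when a queue is nearly empty, pull a
    batch of pending rows, group them by subscriber, and store each group's
    work items in the queue of the process assigned to that subscriber."""
    if pending_count >= low_watermark:        # decision 602: queue not low yet
        return
    rows = pull_batch(BATCH_SIZE)             # operation 604: pull pending rows

    grouped = defaultdict(list)               # operation 606: group by subscriber
    for row in rows:
        grouped[row["subscriber_id"]].append(row)

    for subscriber_id, work_items in grouped.items():          # operation 608
        process_name = process_for_subscriber(subscriber_id)   # "fast" or "slow"
        for item in work_items:
            queues[process_name].put(item)
```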
As shown, a work item is retrieved from a queue (e.g. queues 210, 214 of
Armed with such information, the method 700 continues by generating and sending the message. See operation 706. In one embodiment, this may be accomplished by using the associated row to identify the information to be sent with the message.
It is then determined whether an acknowledgement or a failure is received in response to the message. See decision 708. If the acknowledgement is received, the relevant rows associated with the message work item may be removed from the database table, as indicated in operation 712. On the other hand, if the failure is received, tracking information (e.g. status 306 of
In one embodiment, any retries for sending the message may be spaced out to mitigate any negative impact on processing resources. For example, a first retry may be subject to a first delay, and a second retry may be subject to a second delay that is longer than the first delay, and so on. In one embodiment, a subscriber may manually initiate a retry in real-time (for improved ability to perform diagnostics). To avoid any queue overflow, any pending message (to which there is no response) may be simply discarded or re-queued, etc.
In any case, the process further collects data on such responses to the sent message information, for reporting purposes. In one embodiment, such data may be compiled for statistical analysis. See operation 714. As an option, such statistical analysis may serve as historical data to be used to determine which of a plurality of processes is appropriate for a particular subscriber, as set forth earlier.
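The per-work-item flow of operations 702 through 714, including acknowledgement handling and progressively delayed retries, might be sketched as follows; the callables lookup_action, send_message, remove_rows, and record_failure, as well as the retry constants, are assumptions rather than elements of the disclosed system.

```python
import time

MAX_RETRIES = 3            # illustrative; the source only says retries are spaced out
BASE_DELAY_SECONDS = 5     # each retry waits longer than the previous one

def process_work_item(work_item: dict, lookup_action, send_message,
                      remove_rows, record_failure, stats: dict) -> None:
    """Sketch of operations 702-714: look up the subscriber's action-table row,
    generate and send the message, then either remove the rows on
    acknowledgement or record the failure and retry after a longer delay."""
    action = lookup_action(work_item["subscriber_id"])           # operation 704
    for attempt in range(MAX_RETRIES + 1):
        acknowledged = send_message(action["destination_url"],   # operation 706
                                    work_item["deliverable"])
        if acknowledged:                                         # decision 708
            remove_rows(work_item)                               # operation 712
            stats["acks"] = stats.get("acks", 0) + 1             # operation 714
            return
        record_failure(work_item)               # update tracking (e.g. status 306)
        stats["failures"] = stats.get("failures", 0) + 1
        time.sleep(BASE_DELAY_SECONDS * (attempt + 1))           # spaced-out retries
    # After the final retry, the pending message could be re-queued or discarded
    # to avoid queue overflow, as noted above.
```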
System Overview
Environment 810 is an environment in which an on-demand database service exists. User system 812 may be any machine or system that is used by a user to access a database user system. For example, any of user systems 812 can be a handheld computing device, a mobile phone, a laptop computer, a work station, and/or a network of computing devices. As illustrated in
An on-demand database service, such as system 816, is a database system that is made available to outside users that do not need to necessarily be concerned with building and/or maintaining the database system, but instead may be available for their use when the users need the database system (e.g., on the demand of the users). Some on-demand database services may store information from one or more tenants stored into tables of a common database image to form a multi-tenant database system (MTS). Accordingly, “on-demand database service 816” and “system 816” will be used interchangeably herein. A database image may include one or more database objects. A relational database management system (RDBMS) or the equivalent may execute storage and retrieval of information against the database object(s). Application platform 818 may be a framework that allows the applications of system 816 to run, such as the hardware and/or software, e.g., the operating system. In an embodiment, on-demand database service 816 may include an application platform 818 that enables creation, managing and executing one or more applications developed by the provider of the on-demand database service, users accessing the on-demand database service via user systems 812, or third party application developers accessing the on-demand database service via user systems 812.
The users of user systems 812 may differ in their respective capacities, and the capacity of a particular user system 812 might be entirely determined by permissions (permission levels) for the current user. For example, where a salesperson is using a particular user system 812 to interact with system 816, that user system has the capacities allotted to that salesperson. However, while an administrator is using that user system to interact with system 816, that user system has the capacities allotted to that administrator. In systems with a hierarchical role model, users at one permission level may have access to applications, data, and database information accessible by a lower permission level user, but may not have access to certain applications, database information, and data accessible by a user at a higher permission level. Thus, different users will have different capabilities with regard to accessing and modifying application and database information, depending on a user's security or permission level.
Network 814 is any network or combination of networks of devices that communicate with one another. For example, network 814 can be any one or any combination of a LAN (local area network), WAN (wide area network), telephone network, wireless network, point-to-point network, star network, token ring network, hub network, or other appropriate configuration. As the most common type of computer network in current use is a TCP/IP (Transmission Control Protocol and Internet Protocol) network, such as the global internetwork of networks often referred to as the “Internet” with a capital “I,” that network will be used in many of the examples herein. However, it should be understood that the networks that the present invention might use are not so limited, although TCP/IP is a frequently implemented protocol.
User systems 812 might communicate with system 816 using TCP/IP and, at a higher network level, use other common Internet protocols to communicate, such as HTTP, FTP, AFS, WAP, etc. In an example where HTTP is used, user system 812 might include an HTTP client commonly referred to as a “browser” for sending and receiving HTTP messages to and from an HTTP server at system 816. Such an HTTP server might be implemented as the sole network interface between system 816 and network 814, but other techniques might be used as well or instead. In some implementations, the interface between system 816 and network 814 includes load sharing functionality, such as round-robin HTTP request distributors to balance loads and distribute incoming HTTP requests evenly over a plurality of servers. At least for the users that are accessing that server, each of the plurality of servers has access to the MTS' data; however, other alternative configurations may be used instead.
In one embodiment, system 816, shown in
One arrangement for elements of system 816 is shown in
Several elements in the system shown in
According to one embodiment, each user system 812 and all of its components are operator configurable using applications, such as a browser, including computer code run using a central processing unit such as an Intel Pentium® processor or the like. Similarly, system 816 (and additional instances of an MTS, where more than one is present) and all of their components might be operator configurable using application(s) including computer code to run using a central processing unit such as processor system 817, which may include an Intel Pentium® processor or the like, and/or multiple processor units. A computer program product embodiment includes a machine-readable storage medium (media) having instructions stored thereon/in which can be used to program a computer to perform any of the processes of the embodiments described herein. Computer code for operating and configuring system 816 to intercommunicate and to process webpages, applications and other data and media content as described herein is preferably downloaded and stored on a hard disk, but the entire program code, or portions thereof, may also be stored in any other volatile or non-volatile memory medium or device as is well known, such as a ROM or RAM, or provided on any media capable of storing program code, such as any type of rotating media including floppy disks, optical discs, digital versatile disk (DVD), compact disk (CD), microdrive, and magneto-optical disks, and magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data. Additionally, the entire program code, or portions thereof, may be transmitted and downloaded from a software source over a transmission medium, e.g., over the Internet, or from another server, as is well known, or transmitted over any other conventional network connection as is well known (e.g., extranet, VPN, LAN, etc.) using any communication medium and protocols (e.g., TCP/IP, HTTP, HTTPS, Ethernet, etc.) as are well known. It will also be appreciated that computer code for implementing embodiments of the present invention can be implemented in any programming language that can be executed on a client system and/or server or server system, such as, for example, C, C++, HTML, any other markup language, Java™, JavaScript, ActiveX, any other scripting language, such as VBScript, and many other programming languages as are well known. (Java™ is a trademark of Sun Microsystems, Inc.).
According to one embodiment, each system 816 is configured to provide webpages, forms, applications, data and media content to user (client) systems 812 to support the access by user systems 812 as tenants of system 816. As such, system 816 provides security mechanisms to keep each tenant's data separate unless the data is shared. If more than one MTS is used, they may be located in close proximity to one another (e.g., in a server farm located in a single building or campus), or they may be distributed at locations remote from one another (e.g., one or more servers located in city A and one or more servers located in city B). As used herein, each MTS could include one or more logically and/or physically connected servers distributed locally or across one or more geographic locations. Additionally, the term “server” is meant to include a computer system, including processing hardware and process space(s), and an associated storage system and database application (e.g., OODBMS or RDBMS) as is well known in the art. It should also be understood that “server system” and “server” are often used interchangeably herein. Similarly, the database object described herein can be implemented as single databases, a distributed database, a collection of distributed databases, a database with redundant online or offline backups or other redundancies, etc., and might include a distributed database or storage network and associated processing intelligence.
User system 812, network 814, system 816, tenant data storage 822, and system data storage 824 were discussed above in
Application platform 818 includes an application setup mechanism 938 that supports application developers' creation and management of applications, which may be saved as metadata into tenant data storage 822 by save routines 936 for execution by subscribers as one or more tenant process spaces 904 managed by tenant management process 910, for example. Invocations to such applications may be coded using PL/SOQL 34 that provides a programming language style interface extension to API 932. A detailed description of some PL/SOQL language embodiments is discussed in commonly owned co-pending U.S. Provisional Patent Application 60/828,192 entitled, PROGRAMMING LANGUAGE METHOD AND SYSTEM FOR EXTENDING APIS TO EXECUTE IN CONJUNCTION WITH DATABASE APIS, by Craig Weissman, filed Oct. 4, 2006, which is incorporated in its entirety herein for all purposes. Invocations to applications may be detected by one or more system processes, which manage retrieving application metadata 916 for the subscriber making the invocation and executing the metadata as an application in a virtual machine.
Each application server 900 may be communicably coupled to database systems, e.g., having access to system data 825 and tenant data 823, via a different network connection. For example, one application server 900₁ might be coupled via the network 814 (e.g., the Internet), another application server 900ₙ₋₁ might be coupled via a direct network link, and another application server 900ₙ might be coupled by yet a different network connection. Transmission Control Protocol and Internet Protocol (TCP/IP) are typical protocols for communicating between application servers 900 and the database system. However, it will be apparent to one skilled in the art that other transport protocols may be used to optimize the system depending on the network interconnect used.
In certain embodiments, each application server 900 is configured to handle requests for any user associated with any organization that is a tenant. Because it is desirable to be able to add and remove application servers from the server pool at any time for any reason, there is preferably no server affinity for a user and/or organization to a specific application server 900. In one embodiment, therefore, an interface system implementing a load balancing function (e.g., an F5 Big-IP load balancer) is communicably coupled between the application servers 900 and the user systems 812 to distribute requests to the application servers 900. In one embodiment, the load balancer uses a least connections algorithm to route user requests to the application servers 900. Other examples of load balancing algorithms, such as round robin and observed response time, also can be used. For example, in certain embodiments, three consecutive requests from the same user could hit three different application servers 900, and three requests from different users could hit the same application server 900. In this manner, system 816 is multi-tenant, wherein system 816 handles storage of, and access to, different objects, data and applications across disparate users and organizations.
As an example of storage, one tenant might be a company that employs a sales force where each salesperson uses system 816 to manage their sales process. Thus, a user might maintain contact data, leads data, customer follow-up data, performance data, goals and progress data, etc., all applicable to that user's personal sales process (e.g., in tenant data storage 822). In an example of a MTS arrangement, since all of the data and the applications to access, view, modify, report, transmit, calculate, etc., can be maintained and accessed by a user system having nothing more than network access, the user can manage his or her sales efforts and cycles from any of many different user systems. For example, if a salesperson is visiting a customer and the customer has Internet access in their lobby, the salesperson can obtain critical updates as to that customer while waiting for the customer to arrive in the lobby.
While each user's data might be separate from other users' data regardless of the employers of each user, some data might be organization-wide data shared or accessible by a plurality of users or all of the users for a given organization that is a tenant. Thus, there might be some data structures managed by system 816 that are allocated at the tenant level while other data structures might be managed at the user level. Because an MTS might support multiple tenants including possible competitors, the MTS should have security protocols that keep data, applications, and application use separate. Also, because many tenants may opt for access to an MTS rather than maintain their own system, redundancy, up-time, and backup are additional functions that may be implemented in the MTS. In addition to user-specific data and tenant-specific data, system 816 might also maintain system level data usable by multiple tenants or other data. Such system level data might include industry reports, news, postings, and the like that are sharable among tenants.
In certain embodiments, user systems 812 (which may be client systems) communicate with application servers 900 to request and update system-level and tenant-level data from system 816 that may require sending one or more queries to tenant data storage 822 and/or system data storage 824. System 816 (e.g., an application server 900 in system 816) automatically generates one or more SQL statements (e.g., one or more SQL queries) that are designed to access the desired information. System data storage 824 may generate query plans to access the requested data from the database.
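As a hedged illustration only (the actual query generation of system 816 is not disclosed here), the sketch below shows how an application server might compose a tenant-scoped SQL statement with bind parameters so that a request only ever touches the requesting tenant's rows; the table names, column handling, and tenant_id column are assumptions.

```python
def build_tenant_query(table: str, columns: list, tenant_id: str):
    """Illustrative only: compose a SQL statement that is always scoped to the
    requesting tenant, returning the statement plus its bind parameters."""
    allowed_tables = {"account", "contact", "lead", "opportunity"}
    if table not in allowed_tables:      # guard against arbitrary table names
        raise ValueError(f"unknown table: {table}")
    column_list = ", ".join(columns)     # a real system would validate columns too
    sql = f"SELECT {column_list} FROM {table} WHERE tenant_id = %s"
    return sql, (tenant_id,)

# Usage: fetch contact names and phone numbers for a single tenant only.
statement, params = build_tenant_query("contact", ["name", "phone"], "org_42")
```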
Each database can generally be viewed as a collection of objects, such as a set of logical tables, containing data fitted into predefined categories. A “table” is one representation of a data object, and may be used herein to simplify the conceptual description of objects and custom objects according to the present invention. It should be understood that “table” and “object” may be used interchangeably herein. Each table generally contains one or more data categories logically arranged as columns or fields in a viewable schema. Each row or record of a table contains an instance of data for each category defined by the fields. For example, a CRM database may include a table that describes a customer with fields for basic contact information such as name, address, phone number, fax number, etc. Another table might describe a purchase order, including fields for information such as customer, product, sale price, date, etc. In some multi-tenant database systems, standard entity tables might be provided for use by all tenants. For CRM database applications, such standard entities might include tables for Account, Contact, Lead, and Opportunity data, each containing pre-defined fields. It should be understood that the word “entity” may also be used interchangeably herein with “object” and “table”.
In some multi-tenant database systems, tenants may be allowed to create and store custom objects, or they may be allowed to customize standard entities or objects, for example by creating custom fields for standard objects, including custom index fields. U.S. patent application Ser. No. 10/817,161, filed Apr. 2, 2004, entitled “Custom Entities and Fields in a Multi-Tenant Database System”, and which is hereby incorporated herein by reference, teaches systems and methods for creating custom objects as well as customizing standard objects in a multi-tenant database system. In certain embodiments, for example, all custom entity data rows are stored in a single multi-tenant physical table, which may contain multiple logical tables per organization. It is transparent to customers that their multiple “tables” are in fact stored in one large table or that their data may be stored in the same table as the data of other customers.
While the invention has been described by way of example and in terms of the specific embodiments, it is to be understood that the invention is not limited to the disclosed embodiments. To the contrary, it is intended to cover various modifications and similar arrangements as would be apparent to those skilled in the art. Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.
This application is a continuation of U.S. application Ser. No. 14/252,614, filed Apr. 14, 2014, which is a continuation of U.S. application Ser. No. 13/554,864, filed Jul. 20, 2012, which is a continuation of U.S. application Ser. No. 11/849,036, filed Aug. 31, 2007, which claims the benefit of U.S. Provisional Patent Application No. 60/827,871, filed Oct. 2, 2006, the entire contents of which are incorporated herein by reference.
Related Publication:

Number | Date | Country
---|---|---
20150358257 A1 | Dec 2015 | US

Provisional Application:

Number | Date | Country
---|---|---
60/827,871 | Oct 2006 | US

Continuation Data:

Relation | Number | Date | Country
---|---|---|---
Parent | 14/252,614 | Apr 2014 | US
Child | 14/829,416 | | US
Parent | 13/554,864 | Jul 2012 | US
Child | 14/252,614 | | US
Parent | 11/849,036 | Aug 2007 | US
Child | 13/554,864 | | US