This patent document relates generally to cloud computing systems and more specifically to shared resources such as database connections.
“Cloud computing” services provide shared resources, applications, and information to computers and other devices upon request. In cloud computing environments, services can be provided by one or more servers accessible over the Internet rather than by software installed locally on in-house computer systems. Users can interact with cloud computing services to undertake a wide range of tasks.
Resources in a cloud computing system are often shared among different computing devices and applications. For example, cloud computing applications being executed on computing devices arranged in a group or pod may be configured to share a number of active connections to a shared database. Shared resources can play a prominent role in cloud computing cost and performance. On the one hand, overprovisioning shared resources can increase expense due to maintaining excess capacity. On the other hand, underprovisioning shared resources can result in underperformance as applications and devices compete for scarce resources.
The included drawings are for illustrative purposes and serve only to provide examples of possible structures and operations for the disclosed inventive systems, apparatus, methods, and computer program products for allocating shared resources. These drawings in no way limit any changes in form and detail that may be made by one skilled in the art without departing from the spirit and scope of the disclosed implementations.
Techniques and mechanisms described herein relate to the allocation of finite and shared resources within an on-demand computing services environment. An on-demand computing services environment includes one or more computing systems configured to provide computing services via the internet. In many such systems, computing services are provided by application hosts, which are computing servers configured to execute various cloud computing applications.
In some implementations, computing servers share resources within a cloud computing environment. For example, cloud computing applications being executed on servers arranged in a group (also referred to herein as a “pod”) may be configured to share a number of active connections to a shared database. Many of the shared resources held by application hosts are finite. For example, a database system may be capable of supporting only a designated number of database connections. On the other hand, many pods are configured to operate at close to the maximum capacity of the shared resource, to avoid the expense associated with overprovisioning, and therefore cannot tolerate a large drop in available resources without adversely impacting or failing customer requests. For these reasons, shared resources can play a prominent role in cloud computing cost and performance.
Resource allocation may prove particularly difficult at points of transition, such as during release events when application hosts are patched or upgraded from an older version to a newer version. One approach to a release event is to first activate a new group of computing servers that feature one or more hardware changes or software releases. Application traffic may then be transitioned from an existing, out-of-date group of computing servers to the new group of computing servers. During releases, it is important to ensure traffic continuity; there should be minimal to no disruption to customer requests as application hosts are upgraded. However, because only a finite amount of the shared resource exists, the resource must be allocated between the two groups of computing servers. Thus, a key to providing this guarantee is ensuring that resources are allocated to the right group of application hosts so that the system can sustainably serve the same volume of customer requests before, during, and after the release.
Consider, for instance, the example of allocating database connections between an old and new group of application hosts. Suppose for the purpose of illustration that the database system can support a maximum of 1,000 concurrent database connections, and that database connections are created automatically based on current traffic load. Suppose further that 990 of the concurrent database connections are currently held by the old group of application hosts, while 10 concurrent database connections are held by the new group of application hosts because they were automatically created in response to initial test traffic. If all incoming customer requests were suddenly routed to the new application hosts, in accordance with conventional techniques, then the new hosts would be unable to sustain these requests because the arrival rate of customer requests would exceed the rate at which database connections could be created. This bottleneck, which may be referred to as a “connection exhaustion incident,” would create a significant performance reduction in responding to the request traffic, which may manifest as delay and/or failure in request response.
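The arithmetic behind such an incident can be sketched in a few lines of code. The following is a minimal, hypothetical sketch rather than an implementation from this document; the function name and parameters are assumptions chosen to mirror the numbers in the example above.

```python
# Minimal sketch (hypothetical) of the connection exhaustion scenario:
# traffic is cut over to a group holding 10 of 1,000 connections, and
# connections are created more slowly than demand arrives.

def simulate_cutover(initial_connections: int,
                     creation_rate_per_min: int,
                     concurrent_demand: int,
                     minutes: int,
                     pool_max: int = 1000) -> list[int]:
    """Return the per-minute shortfall between demand and available connections."""
    shortfalls = []
    connections = initial_connections
    for _ in range(minutes):
        # Connections are created at a bounded rate, capped at the pool maximum.
        connections = min(connections + creation_rate_per_min, pool_max)
        # Demand beyond the current connection count manifests as delay or failure.
        shortfalls.append(max(0, concurrent_demand - connections))
    return shortfalls

# 10 connections at cutover, 100 created per minute, demand for 800
# concurrent connections: the new group is starved for several minutes.
print(simulate_cutover(10, 100, 800, minutes=10))
# -> [690, 590, 490, 390, 290, 190, 90, 0, 0, 0]
```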
Complicating matters further, in some instances traffic needs to be rolled back from the new application hosts to the old application hosts, for instance if a problem with a product release is detected. However, when using conventional techniques, in such a situation the old application hosts may have access to none of the shared resource (e.g., database connections) since those resources were released from the old application hosts after traffic was transferred. Thus, if a rollback were required, then under conventional techniques additional request response delays and/or failures may be created due to the old application hosts lacking the shared resources necessary for responding to the volume of request traffic.
Techniques and mechanisms described herein provide for improved allocation of finite and shared resources. In some implementations, such techniques and mechanisms may be particularly beneficial when transitioning traffic between groups of computing servers to ensure traffic continuity, for instance during release events. That is, techniques and mechanisms described herein may reduce or eliminate disruption to customer requests as request traffic is transitioned between groups of computing devices. In particular, resources may be effectively and efficiently allocated between groups of application hosts so that the system can sustainably serve the same volume of customer requests before, during, and after the transition event.
According to various embodiments, transition intent may be surfaced from an underlying process, such as a release orchestration process, which may allow for improved allocation decisions and reduced or eliminated disruption to customers. For instance, traffic intent for a group of computing devices may be classified into phases such as “live”, “warm up”, “idle”, and “draining.” Allocation decisions may also be tuned at a granular level. Using such tools, factors such as the proportion of resources allocated during a transition event as well as the speed at which reallocations take effect may be adjusted.
According to various embodiments, some techniques and mechanisms are described herein with respect to a shared pool of connections to a database system, for the purpose of illustration. However, the techniques and mechanisms described herein are broadly applicable to a wide range of constrained, shared resources. For instance, techniques and mechanisms described herein apply equally well to resources such as connections to message queue brokers, file servers, distributed coordination services (e.g., ZooKeeper), or other such systems within a computing services environment.
According to various embodiments, techniques and mechanisms described herein may facilitate intent-based allocation of constrained and shared resources. Intent-based allocation can take into account information such as the current status or phase of a server or server group, one or more configured distribution targets, and/or a configured ramp-up rate in order to make resource allocation decisions before, during, and after a request traffic transfer event.
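As one possible rendering of this idea, the sketch below maps a pair of traffic-intent phases to a distribution target and derives an allocation from it. The phase names, target percentages, and function signature are illustrative assumptions; the document specifies the inputs (phase, distribution targets, ramp-up rate) but prescribes no concrete API.

```python
# Hypothetical intent-based allocation table. The target fractions echo
# the worked example later in this document (10% idle, 30% warm up,
# 90% live) but are configurable in practice.

DISTRIBUTION_TARGETS = {
    # (outgoing group phase, incoming group phase) -> incoming group's share
    ("live", "idle"): 0.10,
    ("live", "warm_up"): 0.30,
    ("draining", "live"): 0.90,
    ("idle", "live"): 0.90,
}

def target_allocation(old_phase: str, new_phase: str,
                      total_resources: int) -> tuple[int, int]:
    """Split a finite shared resource between two groups based on intent."""
    new_fraction = DISTRIBUTION_TARGETS.get((old_phase, new_phase), 0.0)
    new_share = round(total_resources * new_fraction)
    return total_resources - new_share, new_share

# During warm-up, 30% of 1,000 connections go to the incoming group.
print(target_allocation("live", "warm_up", 1000))  # (700, 300)
```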
By way of illustration, consider the following concrete example. Suppose that Alexandra, a systems administrator, would like to upgrade a group of application servers. Using conventional techniques, she could deactivate the servers, perform the upgrade, and then reactivate the servers. However, such an approach would leave the servers offline for a lengthy and indeterminate period of time, which would violate service level agreements with customers. Using conventional techniques, she could instead activate a new group of updated application servers, and then suddenly switch all traffic to the new group of updated application servers. However, the new group of updated application servers would not have any active database connections or other shared resources because prior to the switch the new group of updated application servers would not be handling any request traffic. Thus, the new group of updated application servers would not be able to handle the surge in request traffic until the shared resources were slowly released by the old group of application servers and then slowly acquired by the new group of updated application servers. Both approaches would result in a considerable disruption to the services provided by the application servers: the first because the servers are disabled completely, and the second because of the connection exhaustion that occurs during the transfer between the two server groups.
Suppose instead that in this example Alexandra employed techniques and mechanisms as described herein. For the purpose of illustration, this example focuses on a single shared resource: connections to a shared database. However, as discussed herein, a variety of shared resources may be managed using techniques and mechanisms described herein. For the purposes of this example, assume that the database system is shared among the application servers and can sustain a maximum of 1,000 concurrent, open database connections. Also assume that the rate at which connections ramp up or down is 100 per minute. That is, 100 connections can be re-allocated from one application group to another within a minute.
In this example, prior to the release event, only the old application group is serving customer traffic. As such, all of the available database connections are allocated to the old application group at that time.
At the start of the release, the new application group is brought up in the idle phase. Customer requests continue to be routed to the old application group. However, database connections start to be reallocated to the new application group, despite it not yet taking any traffic. In this example, 10% of the connections, or 100 connections, are re-allocated from the old application group to the new one. This allocation takes around 1 minute after the new application servers are activated.
At this point, the new application servers may be tested. Once testing on the new application group is complete, the new servers are placed in a warm up phase in preparation for the switch over. This shift to a warm up phase triggers the allocation of additional database connections from the old to the new application group. In this example, an additional 20% of the total connections are re-allocated to the new application group, which takes around 2 minutes to complete.
Once customer traffic is switched over, the new application group enters the live phase while the old application group enters the draining phase. At this time, both application groups are serving customer traffic; incoming customer requests are routed to the new group while existing requests run to completion on the old group. It can take a period of time (e.g., several minutes) for existing requests to drain on the old group, but once a request completes, the corresponding database connection is closed and re-allocated to the live group. In this example, it may take around 6 minutes to re-allocate an additional 600 connections to the new application group.
Once the release is complete, the old application group can be kept around in the idle phase in case rollback is required. For example, a 10%/90% distribution of database connections between the old and new application groups may be maintained.
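The reallocation timeline in this example can be reproduced with a short calculation. The sketch below assumes the figures given above (1,000 total connections, a ramp rate of 100 connections per minute, and cumulative targets of 10%, 30%, and 90% across the phases); the names are illustrative.

```python
# Recomputing the example timeline: 1 minute to reach the idle target,
# 2 more minutes for warm-up, and 6 more minutes while draining.

RAMP_PER_MINUTE = 100
TOTAL = 1000
PHASE_TARGETS = [("idle", 100), ("warm_up", 300), ("live", 900)]

def timeline() -> None:
    new_group = 0
    elapsed = 0.0
    for phase, target in PHASE_TARGETS:
        moved = target - new_group
        minutes = moved / RAMP_PER_MINUTE
        elapsed += minutes
        new_group = target
        print(f"{phase}: +{moved} connections over {minutes:.0f} min "
              f"(new group holds {new_group}, old group {TOTAL - new_group})")
    print(f"total reallocation time: {elapsed:.0f} min")

timeline()
```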
In the preceding example, Alexandra can transfer request traffic from the old application group to the new application group without resulting in the service disruption that is caused by connection exhaustion. This is because database connections may be at least partially overprovisioned (e.g., by 10%), to ensure service continuity in the event of normal ebbs and flows in request traffic over time. As long as the transfer event occurs at a time when request traffic is relatively low, then the slight overprovisioning can be used as a buffer in combination with intent-based resource reallocation to provide for a smooth transfer of request traffic between the two server groups.
Techniques and mechanisms related to connection pools are also discussed in co-pending and commonly assigned U.S. patent application Ser. No. 16/579,729 by Obembe et al., which is hereby incorporated by reference in its entirety and for all purposes.
A first server group is instructed at 102 to reduce a resource allocation level associated with a network-accessible resource. The instruction may be sent by a centralized control entity such as a systems orchestrator. In response to the instruction, the first server group may reduce the level of resources allocated to the network-accessible resource. For instance, the first server group may reduce the number of active connections to a shared database system.
A second server group is instructed at 104 to increase a resource allocation level associated with the network-accessible resource. As with operation 102, the instruction may be sent by a centralized control entity such as a systems orchestrator. In response to the instruction, the second server group may increase the level of resources allocated to the network-accessible resource. For instance, the second server group may increase the number of active connections to a shared database system.
Request traffic may be transferred from the first server group to the second server group at 106. In some implementations, the traffic may be transferred by a load balancer configured to receive request traffic from remote machines via the internet and route the request traffic to the appropriate server group.
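By way of illustration, operations 102, 104, and 106 might be sequenced as in the following sketch. The ServerGroup and LoadBalancer interfaces are hypothetical stand-ins for illustration, not components defined in this document.

```python
# Hypothetical orchestration of operations 102-106: shrink the first
# group's share, grow the second group's share, then shift traffic.

class ServerGroup:
    def __init__(self, name: str, allocation: int):
        self.name = name
        self.allocation = allocation  # e.g., active database connections

    def set_allocation(self, level: int) -> None:
        print(f"{self.name}: allocation {self.allocation} -> {level}")
        self.allocation = level

class LoadBalancer:
    def route_all_traffic_to(self, group: ServerGroup) -> None:
        print(f"routing request traffic to {group.name}")

def transfer(old: ServerGroup, new: ServerGroup, lb: LoadBalancer,
             total: int, new_share: int) -> None:
    old.set_allocation(total - new_share)   # operation 102
    new.set_allocation(new_share)           # operation 104
    lb.route_all_traffic_to(new)            # operation 106

transfer(ServerGroup("pod A", 1000), ServerGroup("pod B", 0),
         LoadBalancer(), total=1000, new_share=900)
```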
According to various embodiments, the load balancer 222 is configured to receive requests for computing services and to route those requests to a server pod. Each computing pod is configured to respond to computing services requests by executing the requests via applications implemented on application servers. Executing a request may involve accessing data in a database instance, transmitting a message via a messaging service, updating configuration information, generating a user interface, and/or any of a variety of operations capable of being performed by the on-demand computing services environment. For instance, executing the request may include accessing the database instance 226.
According to various embodiments, the orchestrator 224 may manage various configuration information associated with the pods. For example, the orchestrator 224 may instruct each pod as to a resource allocation level associated with a shared resource, such as a number of database connections to the database instance 226. As another example, the orchestrator 224 may instruct the load balancer 222 as to the proportion of request traffic to send to various pods.
In this way, the orchestrator may perform operations such as switching active traffic from one pod to another pod. Each of the pods may be associated with a state, such as the states described below.
When a system or application server group is in a “Deactivated” state, it is disabled. For example, the pod B 212 is in a “Deactivated” state.
When a system or application server group is in a “Live” state, it is actively receiving and handling request traffic. For instance, prior to a release event, the pod A 202 is in a “Live” state.
When a system or application server group is in an “Idle” state, the system or application group is not expected to take request traffic imminently. For instance, in the context of a release, application servers in an application server group are either in the process of starting up or are being kept alive as hot standby in case rollback is required.
When a system or application server group is in a “Warm” state, it is expected to take customer traffic soon.
When a system or application server group is in a “Draining” state, it is executing previously received requests but is not taking new request traffic.
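For illustration, the five states described above can be captured as a simple enumeration. The helper that marks which states imply current or imminent traffic is an illustrative reading of the descriptions above, not a normative definition.

```python
# The five server group states, as described in the text. The comments
# paraphrase the descriptions above; the enum itself is hypothetical.

from enum import Enum

class GroupState(Enum):
    DEACTIVATED = "deactivated"  # disabled; handles no traffic
    LIVE = "live"                # actively receiving and handling requests
    IDLE = "idle"                # starting up, or hot standby for rollback
    WARM = "warm"                # expected to take customer traffic soon
    DRAINING = "draining"        # finishing old requests; no new traffic

def expects_traffic(state: GroupState) -> bool:
    """True if the state implies current or imminent request traffic."""
    return state in (GroupState.LIVE, GroupState.WARM)

print([s.value for s in GroupState if expects_traffic(s)])  # ['live', 'warm']
```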
A request to transfer some or all application traffic from a first server group to a second server group is received at 402. In some implementations, the request may be received at a systems orchestrator such as the orchestrator 224 shown in FIG. 2.
In particular embodiments, the request may be associated with an upgrade process in which one or more changes are being made to the application execution environment. However, the request may be received in other contexts as well. For example, the request may be received during a process for splitting traffic between the first server group and the second server group. As another example, the request may be received in the context of addressing a hardware failure associated with the first server group.
An instruction to activate the second server group is transmitted at 404. The instruction may be transmitted to the second server group itself and/or to a controller configured to perform operations such as activating and deactivating various components of the on-demand computing services environment.
An instruction to adjust a resource allocation level at the first server group is transmitted at 406. An instruction to adjust a resource allocation level at the second server group is transmitted at 408. An instruction to adjust request traffic distribution between the first and second server group is transmitted at 410. According to various embodiments, the instructions sent at 406-410 may act to transition the server groups between states. Various configuration parameters may be used. For example, one configuration parameter governing allocation is the distribution of database connections between the two groups. For instance, the instruction may cause the receiving server group to enter a particular state, such as “Live” or “Deactivated.” Either the state itself or the combination of states across the server groups may then lead to an allocation of the shared resource across the two server groups.
As a concrete example, suppose that the shared resource includes 1,000 connections to the database instance 226 shown in the system 200 in FIG. 2.
Another configuration parameter that may be employed is the speed at which shared resources are altered once an allocation instruction has been generated. For instance, if the shared resource is access to a database system, then the speed at which connections are destroyed at one server group and created at the other server group may be varied. Ramping up too quickly on the second server group risks incidence of connection exhaustion on the first server group, which is still serving a vast majority of customer requests. Conversely, ramping up too slowly on the second server group risks incidence of connection exhaustion on the second server group because connection creation cannot keep pace with the arrival rate of customer requests. As with the resource allocation levels, this speed may be strategically determined based on any of a variety of factors, such as the rate at which new connections can be allocated and the duration of the warm-up phase of the transfer.
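As an illustration of this parameter, the sketch below advances an allocation toward its target in rate-limited steps, so that neither group experiences an abrupt swing. The function name and tick granularity are assumptions; the figures mirror the warm-up example discussed earlier.

```python
# Rate-limited reallocation: each tick moves at most `ramp_per_tick`
# connections from the donor group to the recipient group.

def ramp_step(donor: int, recipient: int, target_recipient: int,
              ramp_per_tick: int) -> tuple[int, int]:
    """Advance one tick toward the target split, bounded by the ramp rate."""
    step = min(ramp_per_tick, max(0, target_recipient - recipient))
    return donor - step, recipient + step

donor, recipient = 900, 100
while recipient < 300:  # warm-up target from the earlier example
    donor, recipient = ramp_step(donor, recipient, 300, ramp_per_tick=100)
    print(donor, recipient)
# 800 200
# 700 300
```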
A determination is made at 412 as to whether to perform an additional adjustment. In some implementations, each of the server groups may transition through successive phases until a stable point is reached.
In some embodiments, the determination made at 412 may depend at least in part on whether an error has been detected. For instance, if new software and/or hardware has been deployed on the pod B 212, then the pod B 212 may exhibit problems that only become apparent as traffic is transferred to the pod B 212 at scale. In the event that such problems are detected, the transfer process may effectively be reversed, for instance by rolling back the server group state transitions to return the pod A 202 to the “Live” state.
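To illustrate the determination at 412, the following sketch advances a new server group through successive phases while a health check passes and reverses the completed transitions if a problem is detected. The phase list and health-check interface are hypothetical.

```python
# Hypothetical control loop for operation 412: advance phases while
# healthy; on error, roll back completed transitions in reverse order.

PHASES_FORWARD = ["idle", "warm_up", "live"]

def run_transfer(is_healthy) -> str:
    completed = []
    for phase in PHASES_FORWARD:
        if not is_healthy(phase):
            for done in reversed(completed):
                print(f"rolling back {done}")
            return "rolled_back"
        print(f"new group entered {phase}")
        completed.append(phase)
    return "complete"

# Simulate a problem that only appears at full traffic (the live phase).
print(run_transfer(lambda phase: phase != "live"))  # -> rolled_back
```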
Importantly, the states and connection proportions shown in the figures are provided only as examples; other states and proportions may be used, depending on the implementation.
In particular embodiments, states may be relatively continuous rather than relatively discrete. For example, the proportion of the shared resources allocated to the different server groups may be continuously shifted from one server group to the other rather than adjusted in discrete steps.
According to various embodiments, operations shown in the figures may be performed in an order different from that shown. For example, one or more operations may be omitted, repeated, or performed in parallel.
A request to adjust one or more resource allocation parameters for a shared resource is received at 502. In some implementations, the request may be received as part of an automated procedure that is executed after resources are transferred between server groups as discussed with respect to the method 400 shown in FIG. 4.
Resource utilization information for first and second server groups is identified at 504. In some implementations, the resource utilization information may indicate a proportion of resources allocated to each of the first and second server groups that were actively used during resource transfer. For instance, the resource utilization information may indicate a percentage of database connections assigned to each server group that were in active use, over time, during the period when the first server group was draining and the second server group was live.
Request processing information for first and second server groups is identified at 506. In some implementations, the request processing information may indicate a performance level of the first and second server groups during the resource transfer. For instance, the request processing information may indicate a percentage of application requests received by each of the first and second server groups that were delayed or dropped during the period in which the first server group was idle and the second server group was live.
According to various embodiments, resource utilization and/or request processing information may be identified by, for instance, accessing server logs or other records associated with previous resource transfers between server groups. In particular embodiments, resource utilization and/or request processing information may be identified for one transfer event or multiple transfer events.
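For illustration, the metrics identified at 504 and 506 might be derived from transfer-event records as in the sketch below. The record fields and figures are assumptions; the document specifies the metrics of interest, not a log format.

```python
# Hypothetical transfer-event records and the two metrics described
# above: connection utilization and the delayed/dropped request rate.

records = [
    {"group": "old", "connections_assigned": 100, "connections_active": 95,
     "requests": 50, "requests_delayed_or_dropped": 0},
    {"group": "new", "connections_assigned": 900, "connections_active": 880,
     "requests": 2000, "requests_delayed_or_dropped": 40},
]

for r in records:
    utilization = r["connections_active"] / r["connections_assigned"]
    failure_rate = r["requests_delayed_or_dropped"] / r["requests"]
    print(f'{r["group"]}: utilization={utilization:.0%}, '
          f'delayed/dropped={failure_rate:.1%}')
# old: utilization=95%, delayed/dropped=0.0%
# new: utilization=98%, delayed/dropped=2.0%
```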
A determination is made at 508 as to whether one or both server groups were resource constrained during the transfer. If one or both server groups were constrained, then resource allocation proportions and/or transfer speed are adjusted at 510. Adjusting resource allocation proportions may involve, for example, creating or removing transfer groups, adjusting the proportion of the shared resource assigned to the server groups at different phases of the transfer process, or making any other suitable adjustments.
In some configurations, the method 500 may be implemented as a machine learning algorithm that learns, over time, the resource configuration parameters that provide for the transfer of traffic between server groups while maintaining a continuous, high level of service. Accordingly, the method 500 may be executed periodically, at scheduled times, or upon request in order to provide for more accurate resource configuration parameter determination.
In particular embodiments, resource allocation may differ across hardware configuration, software configuration, resource type, resource characteristics, time of day, day of the week, or other such characteristics. Accordingly, information about such characteristics may be included in the input data associated with the machine learning algorithm.
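As a simple illustration of such feedback-driven tuning, the sketch below nudges the warm-up distribution target and the ramp rate when the incoming group shows signs of constraint. The thresholds and step sizes are illustrative assumptions; a learned model could replace these fixed rules.

```python
# Hypothetical tuner: if the incoming group ran hot (high utilization
# or dropped requests), give it a larger share sooner next time.

def tune(params: dict, new_group_utilization: float,
         new_group_failure_rate: float) -> dict:
    tuned = dict(params)
    if new_group_utilization > 0.95 or new_group_failure_rate > 0.01:
        tuned["warm_up_fraction"] = min(
            0.5, round(params["warm_up_fraction"] + 0.05, 2))
        tuned["ramp_per_minute"] = int(params["ramp_per_minute"] * 1.2)
    return tuned

params = {"warm_up_fraction": 0.30, "ramp_per_minute": 100}
print(tune(params, new_group_utilization=0.98, new_group_failure_rate=0.02))
# {'warm_up_fraction': 0.35, 'ramp_per_minute': 120}
```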
The precise distribution is tunable and can vary depending on the environment since, for example, Salesforce and Oracle databases behave differently. However, allocating resources based on transfer intent and grouping hosts by phases provides the flexibility to tune resource allocation to maintain a high level of service even during traffic transfer and resource reallocation events.
An on-demand database service, implemented using system 616, may be managed by a database service provider. Some services may store information from one or more tenants into tables of a common database image to form a multi-tenant database system (MTS). As used herein, each MTS could include one or more logically and/or physically connected servers distributed locally or across one or more geographic locations. Databases described herein may be implemented as single databases, distributed databases, collections of distributed databases, or any other suitable database system. A database image may include one or more database objects. A relational database management system (RDBMS) or a similar system may execute storage and retrieval of information against these objects.
In some implementations, the application platform 618 may be a framework that allows the creation, management, and execution of applications in system 616. Such applications may be developed by the database service provider or by users or third-party application developers accessing the service. Application platform 618 includes an application setup mechanism 638 that supports application developers' creation and management of applications, which may be saved as metadata into tenant data storage 622 by save routines 636 for execution by subscribers as one or more tenant process spaces 654 managed by tenant management process 660, for example. Invocations to such applications may be coded using PL/SOQL 634, which provides a programming language style interface extension to API 632. A detailed description of some PL/SOQL language implementations is discussed in commonly assigned U.S. Pat. No. 7,730,478, titled METHOD AND SYSTEM FOR ALLOWING ACCESS TO DEVELOPED APPLICATIONS VIA A MULTI-TENANT ON-DEMAND DATABASE SERVICE, by Craig Weissman, issued on Jun. 1, 2010, and hereby incorporated by reference in its entirety and for all purposes. Invocations to applications may be detected by one or more system processes. Such system processes may manage retrieval of application metadata 666 for a subscriber making such an invocation. Such system processes may also manage execution of application metadata 666 as an application in a virtual machine.
In some implementations, each application server 650 may handle requests for any user associated with any organization. A load balancing function (e.g., an F5 Big-IP load balancer) may distribute requests to the application servers 650 based on an algorithm such as least-connections, round robin, observed response time, etc. Each application server 650 may be configured to communicate with tenant data storage 622 and the tenant data 623 therein, and system data storage 624 and the system data 625 therein to serve requests of user systems 612. The tenant data 623 may be divided into individual tenant storage spaces 662, which can be either a physical arrangement and/or a logical arrangement of data. Within each tenant storage space 662, user storage 664 and application metadata 666 may be similarly allocated for each user. For example, a copy of a user's most recently used (MRU) items might be stored to user storage 664. Similarly, a copy of MRU items for an entire tenant organization may be stored to tenant storage space 662. A UI 630 provides a user interface and an API 632 provides an application programming interface to system 616 resident processes to users and/or developers at user systems 612.
System 616 may implement a web-based on-demand computing services environment system. For example, in some implementations, system 616 may include application servers configured to implement and execute a variety of software applications. The application servers may be configured to provide related data, code, forms, web pages and other information to and from user systems 612. Additionally, the application servers may be configured to store information to, and retrieve information from, a database system. Such information may include related data, objects, and/or web page content. With a multi-tenant system, data for multiple tenants may be stored in the same physical database object in tenant data storage 622; however, tenant data may be arranged in the storage medium(s) of tenant data storage 622 so that data of one tenant is kept logically separate from that of other tenants. In such a scheme, one tenant may not access another tenant's data, unless such data is expressly shared.
Several elements in the system shown in FIG. 6 include conventional, well-known elements that are explained only briefly here.
The users of user systems 612 may differ in their respective capacities, and the capacity of a particular user system 612 to access information may be determined at least in part by “permissions” of the particular user system 612. As discussed herein, permissions generally govern access to computing resources such as data objects, components, and other entities of a computing system, a social networking system, and/or a CRM database system. “Permission sets” generally refer to groups of permissions that may be assigned to users of such a computing environment. For instance, the assignments of users and permission sets may be stored in one or more databases of system 616. Thus, users may receive permission to access certain resources. A permission server in an on-demand database service environment can store criteria data regarding the types of users and permission sets to assign to each other. For example, a computing device can provide to the server data indicating an attribute of a user (e.g., geographic location, industry, role, level of experience, etc.) and particular permissions to be assigned to the users fitting the attributes. Permission sets meeting the criteria may be selected and assigned to the users. Moreover, permissions may appear in multiple permission sets. In this way, the users can gain access to the components of a system.
In some on-demand database service environments, an Application Programming Interface (API) may be configured to expose a collection of permissions and their assignments to users through appropriate network-based services and architectures, for instance, using Simple Object Access Protocol (SOAP) Web Service and Representational State Transfer (REST) APIs.
In some implementations, a permission set may be presented to an administrator as a container of permissions. However, each permission in such a permission set may reside in a separate API object exposed in a shared API that has a child-parent relationship with the same permission set object. This allows a given permission set to scale to millions of permissions for a user while allowing a developer to take advantage of joins across the API objects to query, insert, update, and delete any permission across the millions of possible choices. This makes the API highly scalable, reliable, and efficient for developers to use.
In some implementations, a permission set API constructed using the techniques disclosed herein can provide scalable, reliable, and efficient mechanisms for a developer to create tools that manage a user's permissions across various sets of access controls and across types of users. Administrators who use this tooling can effectively reduce their time managing a user's rights, integrate with external systems, and report on rights for auditing and troubleshooting purposes. By way of example, different users may have different capabilities with regard to accessing and modifying application and database information, depending on a user's security or permission level, also called authorization. In systems with a hierarchical role model, users at one permission level may have access to applications, data, and database information accessible by a lower permission level user, but may not have access to certain applications, database information, and data accessible by a user at a higher permission level.
As discussed above, system 616 may provide on-demand database service to user systems 612 using an MTS arrangement. By way of example, one tenant organization may be a company that employs a sales force where each salesperson uses system 616 to manage their sales process. Thus, a user in such an organization may maintain contact data, leads data, customer follow-up data, performance data, goals and progress data, etc., all applicable to that user's personal sales process (e.g., in tenant data storage 622). In this arrangement, a user may manage his or her sales efforts and cycles from a variety of devices, since relevant data and applications to interact with (e.g., access, view, modify, report, transmit, calculate, etc.) such data may be maintained and accessed by any user system 612 having network access.
When implemented in an MTS arrangement, system 616 may separate and share data between users and at the organization-level in a variety of manners. For example, for certain types of data each user's data might be separate from other users' data regardless of the organization employing such users. Other data may be organization-wide data, which is shared or accessible by several users or potentially all users from a given tenant organization. Thus, some data structures managed by system 616 may be allocated at the tenant level while other data structures might be managed at the user level. Because an MTS might support multiple tenants including possible competitors, the MTS may have security protocols that keep data, applications, and application use separate. In addition to user-specific data and tenant-specific data, system 616 may also maintain system-level data usable by multiple tenants or other data. Such system-level data may include industry reports, news, postings, and the like that are sharable between tenant organizations.
In some implementations, user systems 612 may be client systems communicating with application servers 650 to request and update system-level and tenant-level data from system 616. By way of example, user systems 612 may send one or more queries requesting data of a database maintained in tenant data storage 622 and/or system data storage 624. An application server 650 of system 616 may automatically generate one or more SQL statements (e.g., one or more SQL queries) that are designed to access the requested data. System data storage 624 may generate query plans to access the requested data from the database.
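For illustration, a tenant-scoped query of the kind such an application server might generate is sketched below against an in-memory SQLite database. The table and column names are hypothetical; the point is that every generated statement filters on the tenant identifier, so one tenant cannot read another tenant's rows.

```python
# Hypothetical tenant-scoped query generation, using a parameterized
# SQL statement to enforce logical separation of tenant data.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contact (tenant_id TEXT, name TEXT)")
conn.executemany("INSERT INTO contact VALUES (?, ?)",
                 [("org1", "Ada"), ("org2", "Grace")])

def contacts_for_tenant(tenant_id: str) -> list[tuple]:
    # Every generated query filters on the tenant identifier.
    return conn.execute(
        "SELECT name FROM contact WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()

print(contacts_for_tenant("org1"))  # [('Ada',)]
```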
The database systems described herein may be used for a variety of database applications. By way of example, each database can generally be viewed as a collection of objects, such as a set of logical tables, containing data fitted into predefined categories. A “table” is one representation of a data object, and may be used herein to simplify the conceptual description of objects and custom objects according to some implementations. It should be understood that “table” and “object” may be used interchangeably herein. Each table generally contains one or more data categories logically arranged as columns or fields in a viewable schema. Each row or record of a table contains an instance of data for each category defined by the fields. For example, a CRM database may include a table that describes a customer with fields for basic contact information such as name, address, phone number, fax number, etc. Another table might describe a purchase order, including fields for information such as customer, product, sale price, date, etc. In some multi-tenant database systems, standard entity tables might be provided for use by all tenants. For CRM database applications, such standard entities might include tables for case, account, contact, lead, and opportunity data objects, each containing pre-defined fields. It should be understood that the word “entity” may also be used interchangeably herein with “object” and “table”.
In some implementations, tenants may be allowed to create and store custom objects, or they may be allowed to customize standard entities or objects, for example by creating custom fields for standard objects, including custom index fields. Commonly assigned U.S. Pat. No. 7,779,039, titled CUSTOM ENTITIES AND FIELDS IN A MULTI-TENANT DATABASE SYSTEM, by Weissman et al., issued on Aug. 17, 2010, and hereby incorporated by reference in its entirety and for all purposes, teaches systems and methods for creating custom objects as well as customizing standard objects in an MTS. In certain implementations, for example, all custom entity data rows may be stored in a single multi-tenant physical table, which may contain multiple logical tables per organization. It may be transparent to customers that their multiple “tables” are in fact stored in one large table or that their data may be stored in the same table as the data of other customers.
Accessing an on-demand database service environment may involve communications transmitted among a variety of different components. The environment 700 is a simplified representation of an actual on-demand database service environment. For example, some implementations of an on-demand database service environment may include anywhere from one to many devices of each type. Additionally, an on-demand database service environment need not include each device shown, or may include additional devices not shown, in FIG. 7.
The cloud 704 refers to any suitable data network or combination of data networks, which may include the Internet. Client machines located in the cloud 704 may communicate with the on-demand database service environment 700 to access services provided by the on-demand database service environment 700. By way of example, client machines may access the on-demand database service environment 700 to retrieve, store, edit, and/or process constrained resource allocation information.
In some implementations, the edge routers 708 and 712 route packets between the cloud 704 and other components of the on-demand database service environment 700. The edge routers 708 and 712 may employ the Border Gateway Protocol (BGP). The edge routers 708 and 712 may maintain a table of IP networks or ‘prefixes’, which designate network reachability among autonomous systems on the internet.
In one or more implementations, the firewall 716 may protect the inner components of the environment 700 from internet traffic. The firewall 716 may block, permit, or deny access to the inner components of the on-demand database service environment 700 based upon a set of rules and/or other criteria. The firewall 716 may act as one or more of a packet filter, an application gateway, a stateful filter, a proxy server, or any other type of firewall.
In some implementations, the core switches 720 and 724 may be high-capacity switches that transfer packets within the environment 700. The core switches 720 and 724 may be configured as network bridges that quickly route data between different components within the on-demand database service environment. The use of two or more core switches 720 and 724 may provide redundancy and/or reduced latency.
In some implementations, communication between the pods 740 and 744 may be conducted via the pod switches 732 and 736. The pod switches 732 and 736 may facilitate communication between the pods 740 and 744 and client machines, for example via core switches 720 and 724. Also or alternatively, the pod switches 732 and 736 may facilitate communication between the pods 740 and 744 and the database storage 756. The load balancer 728 may distribute workload between the pods, which may assist in improving the use of resources, increasing throughput, reducing response times, and/or reducing overhead. The load balancer 728 may include multilayer switches to analyze and forward traffic.
In some implementations, access to the database storage 756 may be guarded by a database firewall 748, which may act as a computer application firewall operating at the database application layer of a protocol stack. The database firewall 748 may protect the database storage 756 from application attacks such as structured query language (SQL) injection, database rootkits, and unauthorized information disclosure. The database firewall 748 may include a host using one or more forms of reverse proxy services to proxy traffic before passing it to a gateway router and/or may inspect the contents of database traffic and block certain content or database requests. The database firewall 748 may work on the SQL application level atop the TCP/IP stack, managing applications' connection to the database or SQL management interfaces as well as intercepting and enforcing packets traveling to or from a database network or application interface.
In some implementations, the database storage 756 may be an on-demand database system shared by many different organizations. The on-demand database service may employ a single-tenant approach, a multi-tenant approach, a virtualized approach, or any other type of database approach. Communication with the database storage 756 may be conducted via the database switch 752. The database storage 756 may include various software components for handling database queries. Accordingly, the database switch 752 may direct database queries transmitted by other components of the environment (e.g., the pods 740 and 744) to the correct components within the database storage 756.
In some implementations, the app servers 788 may include a framework dedicated to the execution of procedures (e.g., programs, routines, scripts) for supporting the construction of applications provided by the on-demand database service environment 700 via the pod 744. One or more instances of the app server 788 may be configured to execute all or a portion of the operations of the services described herein.
In some implementations, as discussed above, the pod 744 may include one or more database instances 790. A database instance 790 may be configured as an MTS in which different organizations share access to the same database, using the techniques described above. Database information may be transmitted to the indexer 794, which may provide an index of information available in the database 790 to file servers 786. The QFS 792 or other suitable filesystem may serve as a rapid-access file system for storing and accessing information available within the pod 744. The QFS 792 may support volume management capabilities, allowing many disks to be grouped together into a file system. The QFS 792 may communicate with the database instances 790, content search servers 768 and/or indexers 794 to identify, retrieve, move, and/or update data stored in the network file systems (NFS) 796 and/or other storage systems.
In some implementations, one or more query servers 782 may communicate with the NFS 796 to retrieve and/or update information stored outside of the pod 744. The NFS 796 may allow servers located in the pod 744 to access information over a network in a manner similar to how local storage is accessed. Queries from the query servers 782 may be transmitted to the NFS 796 via the load balancer 728, which may distribute resource requests over various resources available in the on-demand database service environment 700. The NFS 796 may also communicate with the QFS 792 to update the information stored on the NFS 796 and/or to provide information to the QFS 792 for use by servers located within the pod 744.
In some implementations, the content batch servers 764 may handle requests internal to the pod 744. These requests may be long-running and/or not tied to a particular customer, such as requests related to log mining, cleanup work, and maintenance tasks. The content search servers 768 may provide query and indexer functions such as functions allowing users to search through content stored in the on-demand database service environment 700. The file servers 786 may manage requests for information stored in the file storage 798, which may store information such as documents, images, basic large objects (BLOBs), etc. The query servers 782 may be used to retrieve information from one or more file systems. For example, the query servers 782 may receive requests for information from the app servers 788 and then transmit information queries to the NFS 796 located outside the pod 744. The ACS servers 780 may control access to data, hardware resources, or software resources called upon to render services provided by the pod 744. The batch servers 784 may process batch jobs which are used to run tasks at specified times. Thus, the batch servers 784 may transmit instructions to other servers, such as the app servers 788, to trigger the batch jobs.
While some of the disclosed implementations may be described with reference to a system having an application server providing a front end for an on-demand database service capable of supporting multiple tenants, the disclosed implementations are not limited to multi-tenant databases nor deployment on application servers. Some implementations may be practiced using various database architectures such as ORACLE®, DB2® by IBM and the like without departing from the scope of present disclosure.
Any of the disclosed implementations may be embodied in various types of hardware, software, firmware, computer readable media, and combinations thereof. For example, some techniques disclosed herein may be implemented, at least in part, by computer-readable media that include program instructions, state information, etc., for configuring a computing system to perform various services and operations described herein. Examples of program instructions include both machine code, such as produced by a compiler, and higher-level code that may be executed via an interpreter. Instructions may be embodied in any suitable language such as, for example, Apex, Java, Python, C++, C, HTML, any other markup language, JavaScript, ActiveX, VBScript, or Perl. Examples of computer-readable media include, but are not limited to: magnetic media such as hard disks and magnetic tape; optical media such as compact disks (CD) or digital versatile disks (DVD); magneto-optical media; and other hardware devices such as read-only memory (“ROM”) devices, random-access memory (“RAM”) devices, and flash memory devices. A computer-readable medium may be any combination of such storage devices.
In the foregoing specification, various techniques and mechanisms may have been described in singular form for clarity. However, it should be noted that some embodiments include multiple iterations of a technique or multiple instantiations of a mechanism unless otherwise noted. For example, a system may be described as using a processor in a variety of contexts, but it can use multiple processors while remaining within the scope of the present disclosure unless otherwise noted. Similarly, various techniques and mechanisms may have been described as including a connection between two entities. However, a connection does not necessarily mean a direct, unimpeded connection, as a variety of other entities (e.g., bridges, controllers, gateways, etc.) may reside between the two entities.
In the foregoing specification, reference was made in detail to specific embodiments including one or more of the best modes contemplated by the inventors. While various implementations have been described herein, it should be understood that they have been presented by way of example only, and not limitation. For example, some techniques and mechanisms are described herein in the context of on-demand computing environments that include MTSs. However, the techniques disclosed herein apply to a wide variety of computing environments. Particular embodiments may be implemented without some or all of the specific details described herein. In other instances, well known process operations have not been described in detail in order to avoid unnecessarily obscuring the disclosed techniques. Accordingly, the breadth and scope of the present application should not be limited by any of the implementations described herein, but should be defined only in accordance with the claims and their equivalents.