An enterprise may utilize applications or services executing in a cloud computing environment. For example, a business might utilize applications that execute at a data center to process purchase orders, human resources tasks, payroll functions, etc. Implementing an application as a service in a cloud computing environment may simplify the design of an application and improve the utilization of resources for that application. In some cases, a Relational Database Management System (“RDBMS”) as a service might be provided in such a cloud computing environment. It can be difficult, however, to ensure that such a system can meet high availability standards and implement a single endpoint solution using traditional approaches.
It would therefore be desirable to provide a single endpoint solution for a service in a cloud-based computing environment in a secure, automatic, and accurate manner.
According to some embodiments, a first forwarding VM may execute in a first availability zone and have a first IP address. Similarly, a second forwarding VM may execute in a second availability zone and have a second IP address. The first and second IP addresses may be recorded with a cloud DNS web service of a cloud provider such that both receive requests from applications directed to a particular DNS name acting as a single endpoint. A service cluster may include a master VM node and a standby VM node. An IPtable in each forwarding VM may forward a request having a port value to a cluster port value associated with the master VM node. Upon a failure of the master VM node, the current standby VM node will be promoted to execute in master mode and the IPtables may be updated to now forward requests having the port value to a cluster port value associated with the newly promoted master VM node (which was previously the standby VM node).
Some embodiments comprise: means for executing a first forwarding VM in a first availability zone and having a first IP address; means for executing a second forwarding VM in a second availability zone and having a second IP address; means for recording the first and second IP addresses with a cloud DNS web service of a cloud provider such that both receive requests from applications directed to a particular DNS name acting as a single endpoint; means for providing a RDBMS as a service cluster, including a master RDBMS VM node and a standby RDBMS VM node; means for forwarding, by an IPtable in each forwarding VM, a request having a port value to a cluster port value associated with the master RDBMS VM node; means for detecting a failure of the master RDBMS VM node; and responsive to said detecting, means for promoting the second RDBMS VM node (standby) to execute in master mode and means for updating the IPtables to forward requests having the port value to a cluster port value associated with the second RDBMS VM node (the newly promoted master RDBMS VM node).
Yet other embodiments comprise: means for executing a first RDBMS VM node and a first controller node in a first availability zone; means for executing a second RDBMS VM node and a second controller node in a second availability zone; means for executing a third controller node in a third availability zone; means for obtaining a first SPIP for the master RDBMS VM node; means for obtaining a second SPIP for the standby RDBMS VM node; means for recording the first and second SPIP addresses with a cloud DNS web service of a cloud provider; means for creating a rule in an IPtable of the second RDBMS VM node such that an OS rejects requests; means for detecting a failure of the master RDBMS VM node; and responsive to said detecting, means for promoting the second RDBMS VM node (standby) to execute in master mode and means for: (i) floating the first SPIP from the master RDBMS VM node to the controller node in the first availability zone, and (ii) deleting the rule in the IPtable of the second RDBMS VM node (which is the newly promoted master RDBMS VM node).
Some technical advantages of some embodiments disclosed herein are improved systems and methods to provide a single endpoint solution for a service in a cloud-based computing environment in a secure, automatic, and accurate manner.
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of embodiments. However, it will be understood by those of ordinary skill in the art that the embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components and circuits have not been described in detail so as not to obscure the embodiments.
One or more specific embodiments of the present invention will be described below. In an effort to provide a concise description of these embodiments, all features of an actual implementation may not be described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
The adoption of cloud applications has substantially increased in recent years. With such an approach, developers do not need to worry about infrastructure or runtimes that are required for an application. Cloud adoption may also result in reduced capital expenditures and maintenance costs while also providing more flexibility and scalability. This in turn increases the importance of Platforms-as-a-Service (“PaaS”) products in this space.
PaaS products provide a platform that lets customers develop, run, and manage applications without the complexity of building and maintaining the infrastructure typically associated with developing and launching an application. It can instead be viewed as a product, where the consumer can control software deployment with minimal configuration options, and the provider provides the networks, servers, storage, Operating System (“OS”), middleware (e.g., Java runtime, .NET runtime, integration, etc.), databases and other services to host the consumer's application.
A cloud platform may comprise an open PaaS product that provides core services to build and extend cloud applications on multiple cloud Infrastructure-as-a-Service (“IaaS”) providers, such as AMAZON® WEB SERVICES (“AWS”), OpenStack, AZURE® from MICROSOFT®, and the GOOGLE® CLOUD PLATFORM (“GCP”).
PostgreSQL is a powerful, open source object-relational database system that uses and extends the SQL language, combined with many features that safely store and scale complicated data workloads. PostgreSQL as a service may be provisioned on a cloud platform at a large scale; for example, thousands of PostgreSQL as a service instances might be provisioned and managed. PostgreSQL as a service instances and/or Virtual Machines (“VMs”) may be created and provisioned by BOSH. BOSH is an open source project that offers a tool chain for release engineering, deployment, and life-cycle management of large-scale distributed services. Note that VM custom modules may be deployed to provide various cloud qualities around the PostgreSQL service. One quality and/or feature associated with VMs is having a single endpoint for high availability PostgreSQL as a service on a cloud platform such as AWS.
In some embodiments, each PostgreSQL as a service instance or cluster may include five VMs (two PostgreSQL VMs and three PGPOOL or controller VMs). For example,
According to some embodiments, one PostgreSQL VM 212 runs in “master” mode and may be responsible for serving all the read and write requests made by applications connected to the cluster. A second PostgreSQL VM 214 runs in “standby” mode and replicates the data from the master 212 either in a synchronous or asynchronous way depending on the configuration. At any point in time, the PostgreSQL standby node 214 may act as the fallback/failover node in case of any failure in master node 212. In some embodiments, the PostgreSQL standby node 214 may also be responsible for serving read requests from the applications connected to the cluster (e.g., so that the request load on the master node 212 can be alleviated).
The three PGPOOL nodes 222, 224, 226 may comprise controller nodes responsible for managing the master and standby nodes 212, 214 in the cluster. A system might make use of software like PGPOOL to achieve a subset of these features, or the system 200 could be based on custom modules. The controller VMs 222, 224, 226 might be responsible for some or all of the following:
In one embodiment, a separate set of three controller nodes 222, 224, 226 is associated with each cluster and is responsible for some or all of the above features. This is done to make sure that any controller node failure does not impact any other cluster. In another embodiment, a pool of controller nodes 222, 224, 226 is created upfront and a random set of three controller nodes is associated with each cluster. These sets of controller nodes might be shared among multiple clusters to reduce the operational costs of VMs.
A cloud platform might manage more than 10,000 PostgreSQL-as-a-Service instances across multiple IaaS entities. An important cloud quality is High Availability (“HA”). Any service with HA qualities may be strictly bound by Service Level Agreements (“SLAs”), which can provide stringent guarantees to application users about a service's availability throughout its life span (e.g., PostgreSQL might need to be available 99.99% of the time, no matter what kind of hardware or software disruptions arise). It may therefore be important that any service disruption resulting from a failure in the system be reduced to a minimal time period.
When an application needs to connect to a service, it must determine the information necessary to connect and communicate with the server. This information is known as a “service end-point.” The service end-point for a PostgreSQL as a service may comprise: a protocol name, a private Internet Protocol (“IP”) address or equivalent Fully Qualified Domain Name (“FQDN”), and a port on which the service has been made available. When a new service instance is created, a unique end-point may be assigned to it, which is returned to applications that want to connect to that service. In the case of PostgreSQL as a service, more than one PostgreSQL VM is part of a service instance (cluster). At any point in time (since there is only one master node), the end-point generated should point to the “current” master (or primary) node in that cluster.
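As a purely illustrative example (the user, host name, port, and database name below are hypothetical), such an end-point might be expressed as a connection string:

```
# Protocol name + FQDN (resolving to the current master) + port + database.
postgresql://appuser@test.psql.cloud.io:5432/orders
```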
One important quality of a cloud platform is high availability. In a single availability zone architecture, an entire zone may fail or go down. In that case, the service will also be unavailable because the zone is not up and running. As a result, a multi-availability zone architecture may be important. For example, on AWS each Availability Zone (“AZ”) runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. According to some embodiments, each of the two PostgreSQL nodes in a cluster may be placed in a different availability zone. In the case of an AZ failure, the system can therefore have a standby node in another zone to handle failover for the cluster.
Some embodiments described herein utilize a Secondary Private Internet Protocol (“SPIP”) address in a single AZ architecture. This approach might not be applicable in a multiple AZ architecture because an SPIP can be floated from one node to another node only if both nodes are present in the same subnet as the SPIP. In other words, the SPIP and the IP addresses of both PostgreSQL nodes should belong to the same subnet. Also, in AWS a subnet cannot be spanned across AZs (that is, no two availability zones can share subnets). As a result, an SPIP cannot be applied in a multiple AZ architecture. Instead, some embodiments described herein provide a novel, dynamic, and distributed approach to provide a single endpoint for multiple AZ PostgreSQL as a service in a cloud platform (e.g., AWS).
To provide a single endpoint solution for a RDBMS as a service in a cloud-based computing environment in a secure, automatic, and accurate manner,
The system 500 may store information into and/or retrieve information from various data stores, which may be locally stored or reside remote from the RDBMS as a service cluster. Although a single RDBMS as a service cluster is described in some embodiments herein, any number of such devices may be included. Moreover, various devices described herein might be combined according to embodiments of the present invention. The system 500 functions may be performed by a constellation of networked apparatuses, such as in a distributed processing or cloud-based architecture.
A user may access the system 500 via a remote device (e.g., a Personal Computer (“PC”), tablet, or smartphone) to view information about and/or manage operational information in accordance with any of the embodiments described herein. In some cases, an interactive graphical user interface display may let an operator or administrator define and/or adjust certain parameters (e.g., to implement various rules and policies) and/or provide or receive automatically generated recommendations or results from the system 500.
An AWS Route 53 record set may be created and the two VM IP addresses may be registered with this record set. Note that Route 53 is a highly available and scalable cloud Domain Name System (“DNS”) web service that gives developers a reliable and cost-effective way to route end users to Internet applications by translating a name such as “Test.psql.cloud.io” into a numeric IP address.
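One way this registration might be scripted is with the AWS Command Line Interface; this is only a sketch, and the hosted zone identifier, record name, TTL, and forwarding VM IP addresses shown are placeholders rather than values taken from the figures:

```
# Register both forwarding VM IP addresses under a single DNS name.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0000000EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "test.psql.cloud.io",
        "Type": "A",
        "TTL": 60,
        "ResourceRecords": [
          {"Value": "10.0.1.10"},
          {"Value": "10.0.2.10"}
        ]
      }
    }]
  }'
```

With such a record in place, the DNS name resolves to either forwarding VM, so applications can treat the name itself as the single endpoint.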
As shown in the system 700 of
On the forwarding VMs 812, 814, IPtable rules (e.g., a Linux feature to block, allow, and/or forward network traffic packets) will be added such that any request arriving on port “x” will be forwarded to port “5432” (the port where the PostgreSQL process runs) of the Master IP 842 (which is the primary PostgreSQL node). In the system 900 of
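A minimal sketch of such IPtable rules on a forwarding VM is shown below, assuming the externally exposed port “x” is 6432 and the current master PostgreSQL node has the IP address 10.0.1.42 (both values are illustrative):

```
# Allow the kernel to forward packets that are not addressed to this VM.
sysctl -w net.ipv4.ip_forward=1

# Rewrite the destination of traffic arriving on port 6432 to master:5432.
iptables -t nat -A PREROUTING -p tcp --dport 6432 \
  -j DNAT --to-destination 10.0.1.42:5432

# Masquerade the forwarded traffic so replies return via the forwarding VM.
iptables -t nat -A POSTROUTING -p tcp -d 10.0.1.42 --dport 5432 -j MASQUERADE
```

With these rules in place, the forwarding VM effectively acts as a simple TCP relay for the cluster port, so applications never need to know the master node's private IP address.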
Now, consider a failover such as system 1000 of
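When such a failover occurs and the standby node is promoted, the IPtable rule on each forwarding VM may simply be repointed at the new master. The following is only a hedged sketch, reusing the illustrative port 6432, assuming the new master's IP is 10.0.2.42, and assuming the DNAT rule sits at position 1 of the chain:

```
# Repoint rule 1 in the PREROUTING chain at the newly promoted master.
iptables -t nat -R PREROUTING 1 -p tcp --dport 6432 \
  -j DNAT --to-destination 10.0.2.42:5432
```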
Moreover, this solution may be extended to any number of zones. For example, corresponding forwarding VMs may be added in each zone. Consider, however, the situation when one of the forwarding VMs goes down (e.g., fails or is deleted) and Route 53 forwards the request to that specific forwarding VM. This is a valid failure scenario, because if the forwarding VM goes down then there is no VM to which the request can be sent. Note that Route 53 has a feature to set a health check on its registered VMs. So, while registering the forwarding VMs with Route 53, health checks may be set for them. According to some embodiments, the health check is set on a standard port (e.g., port 22) which will indicate whether the VM is up or down. The system 1400 of
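For example, such a health check might be created with the AWS Command Line Interface as sketched below; the caller reference and IP address are placeholders, and the returned health check identifier would then be attached to the Route 53 record for that forwarding VM (using a routing policy that supports health checks) so a failed VM is taken out of DNS rotation:

```
# Create a TCP health check against port 22 of a forwarding VM.
aws route53 create-health-check \
  --caller-reference forwarding-vm-1-check \
  --health-check-config '{
    "IPAddress": "10.0.1.10",
    "Port": 22,
    "Type": "TCP",
    "RequestInterval": 10,
    "FailureThreshold": 2
  }'
```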
At S1810, the system may execute a first forwarding VM in a first availability zone and having a first Internet Protocol (“IP”) address. At S1820, the system may execute a second forwarding VM in a second availability zone and having a second IP address. At S1830, the system may record the first and second IP addresses with a cloud DNS web service of a cloud provider such that both receive requests from applications directed to a particular DNS name acting as a single endpoint. At S1840, the system may provide a RDBMS as a service cluster, including a master RDBMS VM node and a standby RDBMS VM node. At S1850, the system may forward, by an IPtable in each forwarding VM, a request having a port value to a cluster port value associated with the master RDBMS VM node. At S1860, the system may detect a failure of the master RDBMS VM node. Responsive to this detection, the system may update the IPtables to forward requests having the port value to a cluster port value associated with the standby RDBMS VM node at S1870.
Thus, embodiments described herein may provide advantages such as being easily scaled for many PostgreSQL clusters. Moreover, failover downtime may be relatively small (e.g., 2 or 3 seconds), which may be almost negligible. Some embodiments may be easily extended to any number of availability zones and cover all sorts of failure cases. In addition, embodiments may be applied to any type of service, application, or cluster. Examples have been provided from a PostgreSQL point of view, but might be extended to any other type of service.
Some embodiments described herein may provide a novel method for a single endpoint (e.g., in AWS) in a multiple AZ environment. For example, a FQDN may be used as a single endpoint and be part of the service endpoint information used to connect and communicate with a PostgreSQL server. Thus, the FQDN may resolve to the “current” master in a Highly Available (“HA”) manner at any point in time. In a multiple AZ architecture, each PostgreSQL node may be placed in a different AZ. Also, the controller nodes may be placed in different AZs (e.g., three different AZs). With such an architecture for a particular cluster, one PostgreSQL node and one PGPOOL/controller node may be placed in a single availability zone as illustrated by the system 1900 of
On both PostgreSQL nodes 2012, 2014 a Secondary Private IP (“SPIP”) address will be added as illustrated by the system 2000 of
To add/associate the IP address, the system may use:
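For instance, on a Linux VM the standard ip utility might be invoked as follows; the address and prefix length are illustrative only, and on AWS the same secondary address would typically also be assigned to the instance's network interface:

```
# Attach the secondary private IP (SPIP) as an additional address on eth0.
ip address add 10.0.1.100/24 dev eth0
```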
This command tells the OS to add the IP address on interface eth0. Note that an SPIP address can be floated to another VM within the same AZ. Thus, if two PostgreSQL nodes are in a single AZ, Ip2 can float between those two nodes. Note that the system 2000 of
As illustrated by the system 2100 of
When a client or customer tries to connect to the Route 53 record set on port “x,” it will forward the request to any of the registered nodes on the same port “x.” If there is no process running on port “x” on the requested node, then the OS running on that node will reject the request with a “TCP-reset” packet. If the Route 53 record set receives a “TCP-reset” packet as a response from a requested node, then it will try another node registered with the Route 53 record set. According to some embodiments, this property may be used to solve a single endpoint problem in an AWS multiple AZ architecture. On the standby PostgreSQL node, an IPtable rule is added to reject any packet that arrives with a destination of the SPIP assigned to the standby PostgreSQL node and a port of the PostgreSQL process (e.g., port 5432) with a “TCP-reset” packet as illustrated by the system 2300 of
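A sketch of such a rule is shown below, assuming (purely for illustration) that the SPIP assigned to the standby node is 10.0.2.100:

```
# Answer any packet destined for the standby's SPIP on the PostgreSQL port
# with a TCP reset, so that clients retry the other registered node.
iptables -A INPUT -p tcp -d 10.0.2.100 --dport 5432 \
  -j REJECT --reject-with tcp-reset
```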
Note that on both of the PostgreSQL nodes there is a process running on port 5432. To reject the packet, an IPtable rule needs to be added on the standby node. If no process is running on a given port on the node, then the OS will itself reject the packet.
Now suppose that the primary PostgreSQL node goes down as illustrated by the system 2400 of
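In that case, consistent with the failover handling described above, the failed master's SPIP may be floated to the controller node in the same availability zone and the reject rule may be deleted from the newly promoted master. The following is only a hedged sketch of those two steps; the network interface identifier and the addresses are placeholders:

```
# Step (i): reassign the failed master's SPIP to the controller node in the
# same availability zone (run with appropriate AWS credentials), then attach
# the address inside that controller node's OS.
aws ec2 assign-private-ip-addresses \
  --network-interface-id eni-0abc123example \
  --private-ip-addresses 10.0.1.100 \
  --allow-reassignment
ip address add 10.0.1.100/24 dev eth0        # executed on the controller node

# Step (ii): on the newly promoted master, delete the TCP-reset reject rule so
# that connections to its SPIP on port 5432 are accepted.
iptables -D INPUT -p tcp -d 10.0.2.100 --dport 5432 \
  -j REJECT --reject-with tcp-reset
```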
Now consider the situation where a standby node abruptly fails or one of the SPIPs is not available. In these cases, the Route 53 record set will not receive a “TCP-reset” packet. If Route 53 does not receive a “TCP-reset” packet and still cannot connect, then it retries the connection for a predetermined period of time (or a predetermined number of times). This will be repeated for all new connection requests. As a result, frequent downtime might occur. Note that Route 53 has a feature to set a health check on its registered VMs. While registering the PostgreSQL VMs with Route 53, Route 53 health checks will be set for the PostgreSQL VMs. The health check can be set on a standard port (e.g., port 22), or any custom health check endpoint can also be used (which will indicate whether the VM is up or down). If the health check corresponding to a VM passes, then Route 53 will forward the request to that VM as illustrated by the system 2700 of
Note that the embodiments described herein may be implemented using any number of different hardware configurations. For example,
The processor 3110 also communicates with a storage device 3130. The storage device 3130 can be implemented as a single database or the different components of the storage device 3130 can be distributed using multiple databases (that is, different deployment information storage options are possible). The storage device 3130 may comprise any appropriate information storage device, including combinations of magnetic storage devices (e.g., a hard disk drive), optical storage devices, mobile telephones, and/or semiconductor memory devices. The storage device 3130 stores a program 3112 and/or RDBMS platform 3114 for controlling the processor 3110. The processor 3110 performs instructions of the programs 3112, 3114, and thereby operates in accordance with any of the embodiments described herein. For example, the processor 3110 may execute a first forwarding VM in a first availability zone and having a first IP address and a second forwarding VM in a second availability zone and having a second IP address. The processor 3110 may also record the first and second IP addresses with a cloud DNS web service of a cloud provider such that both receive requests from applications directed to a particular DNS name acting as a single endpoint. The processor 3110 may then provide a RDBMS as a service cluster, including a master RDBMS VM node and a standby RDBMS VM node. An IPtable in each forwarding VM may forward a request having a port value to a cluster port value associated with the master RDBMS VM node. When the processor 3110 detects a failure of the master RDBMS VM node, it may update the IPtables to forward requests having the port value to a cluster port value associated with the standby RDBMS VM node.
In other embodiments, the processor 3110 may execute a first RDBMS VM node and first controller node in a first availability zone and a second RDBMS VM node and second controller node in a second availability zone. The processor may then obtain a first SPIP for the master RDBMS VM node and a second SPIP for the standby RDBMS VM node. The first and second SPIP addresses may be recorded with a cloud DNS web service of a cloud provider and a rule may be created in an IPtable of the second RDBMS VM node such that an OS rejects requests. When the processor 3110 detects a failure of the master RDBMS VM node, it may: (i) float the first SPIP from the master RDBMS VM node to the controller node in the first availability zone, and (ii) delete the rule in the IPtable of the second RDBMS VM node.
The programs 3112, 3114 may be stored in a compressed, uncompiled and/or encrypted format. The programs 3112, 3114 may furthermore include other program elements, such as an operating system, clipboard application, a database management system, and/or device drivers used by the processor 3110 to interface with peripheral devices.
As used herein, information may be “received” by or “transmitted” to, for example: (i) the platform 3100 from another device; or (ii) a software application or module within the platform 3100 from another software application, module, or any other source.
In some embodiments (such as the one shown in
Referring to
The RDBMS identifier 3202 might be a unique alphanumeric label or link that is associated with PostgreSQL as a service or a similar service being defined for an application. The VM identifier 3204 might identify a machine to be associated with a cluster for that service (e.g., a controller node VM). The VM description 3206 might indicate if the VM is associated with a master node, standby node, controller node, etc. The IP address 3208 may be a private IP address used to identify that particular VM within the cluster.
The following illustrates various additional embodiments of the invention. These do not constitute a definition of all possible embodiments, and those skilled in the art will understand that the present invention is applicable to many other embodiments. Further, although the following embodiments are briefly described for clarity, those skilled in the art will understand how to make any changes, if necessary, to the above-described apparatus and methods to accommodate these and other embodiments and applications.
Although specific hardware and data configurations have been described herein, note that any number of other configurations may be provided in accordance with some embodiments of the present invention (e.g., some of the information associated with the databases described herein may be combined or stored in external systems). Moreover, although some embodiments are focused on particular types of applications and services, any of the embodiments described herein could be applied to other types of applications and services. In addition, the displays shown herein are provided only as examples, and any other type of user interface could be implemented. For example,
The present invention has been described in terms of several embodiments solely for the purpose of illustration. Persons skilled in the art will recognize from this description that the invention is not limited to the embodiments described, but may be practiced with modifications and alterations limited only by the spirit and scope of the appended claims.
The present application is a non-provisional divisional claiming priority to non-provisional application Ser. No. 16/658,382, which was filed on Oct. 21, 2019, the content of which is incorporated by reference herein in its entirety.