Entities, such as software developers and/or vendors, provide software and services. Example software can include enterprise software. In some examples, enterprise software can include application software (an application) that interacts with one or more databases. For example, an application can be hosted on one or more application servers and a user can interact with the application using a client device. In some examples, user interaction can result in data being read from, written to, and/or modified within one or more databases provided in one or more database systems.
During a lifecycle of the application and/or database, one or more maintenance operations may be required. Example maintenance operations include upgrading, patching, and testing. In order to perform such maintenance procedures, the application and/or database may be taken offline, such that users are unable to interact with the application and/or database. This is referred to as downtime. Although software providers have strived to minimize downtime, achieving zero downtime during such maintenance procedures has been an elusive goal. Further, some maintenance procedures have required, for example, copying of data to separate databases, which can require additional resources (e.g., computer processing, memory).
Implementations of the present disclosure are directed to minimizing downtime during upgrade of an application. More particularly, implementations of the present disclosure are directed to minimizing downtime during upgrade of an application in systems with database-side replication.
In some implementations, actions include providing, by a deploy tool, one or more source-side clone data components in the first database system, each source-side clone data component being a copy of a respective data component, defining, by the deploy tool, a source-side green access schema in the first database system, the source-side green access schema providing one or more views to the one or more source-side clone data components, providing, by a replication system and based on one or more statements received from the deploy tool, one or more consumer-side clone data components in the second database system, each consumer-side clone data component being a copy of a respective data component, defining, by the replication system and based on one or more statements received from the deploy tool, a consumer-side green access schema in the second database system, the consumer-side green access schema providing one or more views to the one or more consumer-side clone data components, and, during execution of the upgrade, replicating, by a handler of the replication system, data from at least one source-side data component to a respective consumer-side data component. Other implementations of this aspect include corresponding systems, apparatus, and computer programs, configured to perform the actions of the methods, encoded on computer storage devices.
These and other implementations can each optionally include one or more of the following features: actions further include, after completion of the upgrade, switching an upgraded version of the application to the source-side green access schema for interacting with the one or more source-side clone data components, and removing a source-side blue access schema, through which a previous version of the application accessed data in the first database system; actions further include, after completion of the upgrade, switching replication of data from the first database system to the consumer-side green access schema, and removing a consumer-side blue access schema, through which data was replicated into the second database system; the replication system triggers switching of replication of data, and removing the consumer-side blue access schema based on post-upgrade statements received from the deploy tool; the source-side green access schema includes a replication trigger for replicating data to the second database system; a data schema of the first database system includes a trigger for copying data from an original data component to at least one source-side clone data component; and the one or more source-side data components include one or more of a clone column and a clone table.
The present disclosure also provides a computer-readable storage medium coupled to one or more processors and having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations in accordance with implementations of the methods provided herein.
The present disclosure further provides a system for implementing the methods provided herein. The system includes one or more processors, and a computer-readable storage medium coupled to the one or more processors having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations in accordance with implementations of the methods provided herein.
It is appreciated that methods in accordance with the present disclosure can include any combination of the aspects and features described herein. That is, methods in accordance with the present disclosure are not limited to the combinations of aspects and features specifically described herein, but also include any combination of the aspects and features provided.
The details of one or more implementations of the present disclosure are set forth in the accompanying drawings and the description below. Other features and advantages of the present disclosure will be apparent from the description and drawings, and from the claims.
Like reference symbols in the various drawings indicate like elements.
Implementations of the present disclosure are directed to minimizing downtime during upgrade of an application. More particularly, implementations of the present disclosure are directed to minimizing downtime during upgrade of an application in systems with database-side replication. Implementations can include actions of providing, by a deploy tool, one or more source-side clone data components in the first database system, each source-side clone data component being a copy of a respective data component, defining, by the deploy tool, a source-side green access schema in the first database system, the source-side green access schema providing one or more views to the one or more source-side clone data components, providing, by a replication system and based on one or more statements received from the deploy tool, one or more consumer-side clone data components in the second database system, each consumer-side clone data component being a copy of a respective data component, defining, by the replication system and based on one or more statements received from the deploy tool, a consumer-side green access schema in the second database system, the consumer-side green access schema providing one or more views to the one or more consumer-side clone data components, and, during execution of the upgrade, replicating, by a handler of the replication system, data from at least one source-side data component to a respective consumer-side data component.
In some examples, the client devices 102 can communicate with one or more of the server devices 108 over the network 106. In some examples, the client device 102 can include any appropriate type of computing device such as a desktop computer, a laptop computer, a handheld computer, a tablet computer, a personal digital assistant (PDA), a cellular telephone, a network appliance, a camera, a smart phone, an enhanced general packet radio service (EGPRS) mobile phone, a media player, a navigation device, an email device, a game console, or an appropriate combination of any two or more of these devices or other data processing devices.
In some implementations, the network 106 can include a large computer network, such as a local area network (LAN), a wide area network (WAN), the Internet, a cellular network, a telephone network (e.g., PSTN) or an appropriate combination thereof connecting any number of communication devices, mobile computing devices, fixed computing devices and server systems.
In some implementations, each server device 108 includes at least one server and at least one data store.
In some implementations, one or more data stores of the server system 104 store one or more databases. In some examples, a database can be provided as an in-memory database. In some examples, an in-memory database is a database management system that uses main memory for data storage. In some examples, main memory includes random access memory (RAM) that communicates with one or more processors, e.g., central processing units (CPUs), over a memory bus. An in-memory database can be contrasted with database management systems that employ a disk storage mechanism. In some examples, in-memory databases are faster than disk-storage databases, because internal optimization algorithms can be simpler and execute fewer CPU instructions (e.g., require reduced CPU consumption). In some examples, accessing data in an in-memory database eliminates seek time when querying the data, which provides faster and more predictable performance than disk-storage databases.
Implementations of the present disclosure are described in further detail herein with reference to an example context. The example context includes business applications that are executed on a client-server architecture, such as the example architecture 100 of FIG. 1.
In some implementations, applications and/or databases undergo lifecycle management. In some examples, lifecycle management includes executing one or more maintenance procedures for an application and/or a database. Example maintenance procedures can include an upgrade procedure, a patch procedure, a customer configuration procedure, and development and testing procedures. An example upgrade procedure can include updating software. For example, an application can be updated from a first version, e.g., V1, to a second version, e.g., V2. An example update can include adding functionality to the application. As another example, a database can be updated from a first version, e.g., V1, to a second version, e.g., V2. An example update can be updating a data schema of the database. In some examples, a data schema (also referred to as database schema) is a data structure that defines how data is to be stored in the database. In some examples, the database schema can be defined in a formal language that is supported by a database management system (DBMS), and can include a set of formulas (also referred to as constraints) imposed on the database. In general, a data schema can be described as a catalog that specifies all database objects that can be stored in the database. In some examples, different data schemas, e.g., V1 versus V2, can have different objects with the same object name, but different structures.
As introduced above, the execution of maintenance procedures traditionally results in downtime (e.g., unavailability) of an application and/or database. Implementations of the present disclosure enable zero downtime of the application and/or database during maintenance procedures. That is, implementations of the present disclosure provide continuous availability of an application and/or data during one or more maintenance procedures. Implementations of the present disclosure are particularly directed to scenarios in which an application is undergoing maintenance (e.g., upgrade to a new version, ideally without downtime), and, simultaneously, the database accessed by the application is partially replicated to another database (e.g., not a full database replication). For example, the S/4HANA application, provided by SAP SE of Walldorf, Germany, can undergo an upgrade, while parts of the database are replicated to another database using the SAP Landscape Transformation Replication Server, also provided by SAP SE. In some examples, the S/4HANA application is upgraded with zero downtime through a software update manager. In systems using replication, typically, a full replication is performed once, then only deltas are replicated to the other database. This way, the replicated set of data is always available, and up-to-date.
Executing maintenance procedures in scenarios where a portion of data is replicated between databases can present several problems. For example, replication can hinder the maintenance procedure, and the maintenance procedure can inhibit replication. That is, the replication instruments the database with triggers, and logging tables. The triggers read data written to the database, analyze the data, and store data in logging tables. The logging tables are read by the replication software, which reads the data from the database, and writes the data to the other database. If the maintenance procedure (upgrade) is to modify a table, and the trigger is detected, it can be determined that the change to the table is incompatible with the trigger. Consequently, the modification is not performed, so that the trigger remains functioning. Further, the upgrade changes the structures of tables to adjust to the new version. This can imply changes that damage the trigger logic (e.g., the key of the table is altered, breaking a trigger that stores key values of the modified records in the logging table).
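By way of illustration only, the following sketch shows how such trigger-based instrumentation might look. The SQL is written in PostgreSQL syntax for concreteness (the disclosure's own context is SAP HANA, whose trigger syntax differs), and the table, trigger, and column names are hypothetical:

```sql
-- Hypothetical application table, and a logging table that records only
-- the keys, and the kind of operation, for each change.
CREATE TABLE tab (id INTEGER PRIMARY KEY, b VARCHAR(10));
CREATE TABLE logtab (id INTEGER, op CHAR(1));

CREATE FUNCTION log_change() RETURNS trigger AS $$
BEGIN
  IF TG_OP = 'DELETE' THEN
    INSERT INTO logtab VALUES (OLD.id, 'D');
    RETURN OLD;
  ELSE
    -- 'I' for INSERT, 'U' for UPDATE; the replication software later reads
    -- the logged keys, fetches the current row content, and writes it to
    -- the other database.
    INSERT INTO logtab VALUES (NEW.id, substr(TG_OP, 1, 1));
    RETURN NEW;
  END IF;
END $$ LANGUAGE plpgsql;

CREATE TRIGGER tab_repl AFTER INSERT OR UPDATE OR DELETE ON tab
FOR EACH ROW EXECUTE FUNCTION log_change();
```

Altering the key of tab without adjusting log_change() would break this instrumentation, which is precisely the conflict between upgrade and replication described above.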
As another example, the upgrade can trigger a full reload of replicated data. For example, if the database is relatively large (e.g., in terms of amount of data stored), the replicated data can also be relatively large. This implies that replicating the database to another database can take a significant amount of time, and resources (e.g., processing power, bandwidth). When the upgrade modifies a table, or the content of a table, potentially the complete table has to be replicated again, which can lead to downtime on the target side of the replication, as the data set can be inconsistent during replication of the complete set. For example, suppose the upgrade adds one column, and updates that column for every row of the table. This fires the replication trigger for every record, and thus, typically, the complete set of rows is replicated again.
As another example, downtime can result from inconsistent data sets. For example, the upgrade can modify the database of the application on the source side (replicated from). If this deployment occurs with downtime, the target side (replicated to) will experience downtime as well, as the inconsistent temporary stages during the upgrade are replicated. Thus, the target side can also become inconsistent.
To address issues in maintenance procedures in systems including replication between databases, such as the above issues, implementations of the present disclosure introduce a view-layer between the application, and the persistency. In some implementations, the upgrade procedure is performed with a second view-layer, which is built in parallel to a first view-layer, to provide the target version structure. In this manner, modifications to the database tables resulting from the upgrade are shielded from the application running on the start version (e.g., application V1). Once the upgrade is completed, the application is switched to use the view-layer of the target (e.g., application V2).
In accordance with implementations of the present disclosure, the replication is designed to run on the view-layer, and not on the table layer of the database. In this manner, the replication is not impacted by modifications to the database layer as a result of the upgrade. In some implementations, the target view-layer is also instrumented to run replication of data. The same view-layer, and the switch to the target version view-layer, are introduced on the consumer-side.
To provide further context for implementations of the present disclosure, a so-called blue-green deployment for maintenance procedures is described. In some examples, the blue-green deployment can provide zero downtime deployments of new software versions. The name blue-green deployment is derived from analogy to mechanisms used for runtime blue-green deployments. In runtime blue-green deployments, a new (green) runtime is deployed in parallel to a current (blue) runtime. After deployment, users are re-directed to the green runtime, and the blue runtime is stopped.
The blue-green deployment can be used for scenarios including so-called code push down, in which database objects, such as views, table functions, and procedures, are pushed to the database. In some examples, a separate database schema is used as the runtime environment for the code in the database. This schema is referred to as an access schema, and includes the code objects. A separate, so-called data schema holds the persistency. The application only has access to the access schema, and objects in the data schema are exposed as views to the access schema. During the blue-green deployment, a new access schema (green) is provided for the target version (e.g., V2) in parallel with the current access schema (blue) used by the source version (e.g., V1, also referred to as source, and/or production). Accordingly, during the deployment (upgrade), there are multiple access schemas existing at the same time (e.g., blue, and green).
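A minimal sketch of this separation follows, again in PostgreSQL syntax with hypothetical names (tab_1 stands in for the document's TAB #1 naming, which most databases would require to be quoted):

```sql
-- Persistency lives in the data schema; the application connects only to
-- an access schema, and sees tables through views.
CREATE SCHEMA data_schema;
CREATE SCHEMA access_blue;   -- blue access schema, used by version V1
CREATE SCHEMA access_green;  -- green access schema, built for version V2

CREATE TABLE data_schema.tab_1 (id INTEGER PRIMARY KEY, b VARCHAR(10));

-- The view exposes the table under its DDIC name, TAB.
CREATE VIEW access_blue.tab AS
  SELECT id, b FROM data_schema.tab_1;
```

During the deployment, both access schemas exist at the same time, while the data schema remains the single persistency underneath.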
The blue-green deployment enables structure, and name abstraction. For example, the view layer hides structure changes, and content changes in the data schema during the deployment. For example, new columns can be added during the deployment. The new columns are not selected by the view in the blue access schema. A table column can have a different name than specified (e.g., in a data dictionary (DDIC)). In this manner, complex type changes can be performed without affecting availability of the table data. For example, a new column can be added with the target type, and populated with data. In the green access schema, the target column is exposed instead of the previous column.
As another example, a table can have a different name than specified (e.g., in the DDIC). This enables a shadow table to be created and populated for a production-use table. In the green access schema, the new table is then exposed instead of the old table. In some examples, obsolete columns, and tables are deleted from the persistency after the deployment is complete. As another example, renaming a column, or a table, traditionally requires an exclusive database lock, which would result in downtime. To avoid this, the names of the tables in the data schema are not adjusted to the DDIC names after the deployment, and remain in use during the release. The name differences are shielded from the application by the view layer in the access schema.
The blue-green deployment also enables various data migration approaches. For example, in a shadow column migration, if only the data in one column needs to be migrated (e.g., column B), a so-called shadow column (e.g., B #1) can be added to the table, and include a new data type. The shadow column is populated with data migration tools, and is kept up-to-date using database triggers. This approach limits the data volume to be copied. In some examples, the view layer in the blue access schema hides the shadow field from the running application. In some examples, the view layer in the green access schema selects the shadow column, and exposes the shadow column with the correct name (e.g., select B #1 as B). After the switch-over, the original column (B) can be dropped.
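A hedged sketch of this shadow-column approach, in the same hypothetical PostgreSQL setting (b_1 stands for B #1), might be:

```sql
-- Add the shadow column with the new, longer type, and run the initial fill
-- (here, 0-padding from the left, as for a lengthened NUMC).
ALTER TABLE data_schema.tab_1 ADD COLUMN b_1 VARCHAR(20);
UPDATE data_schema.tab_1 SET b_1 = lpad(b, 20, '0');

-- Keep the shadow column up-to-date while the blue application writes b.
CREATE FUNCTION fill_b_1() RETURNS trigger AS $$
BEGIN
  NEW.b_1 := lpad(NEW.b, 20, '0');
  RETURN NEW;
END $$ LANGUAGE plpgsql;

CREATE TRIGGER tab_1_fill_b_1 BEFORE INSERT OR UPDATE OF b ON data_schema.tab_1
FOR EACH ROW EXECUTE FUNCTION fill_b_1();

-- The green view exposes the shadow column under the DDIC name.
CREATE VIEW access_green.tab AS
  SELECT id, b_1 AS b FROM data_schema.tab_1;
-- After switch-over, the trigger, and the original column b, can be dropped.
```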
As another example, in shadow table migration, the complete table is to be copied, in case the key changes, or data migration runs on most columns. In this case, a copy of the table (e.g., TAB) is created, with a new name (e.g., TAB #1), and the new structure. The data migration reads from the blue table (TAB), and writes to the green table (TAB #1). The view layer in the blue access schema hides the shadow table (TAB #1) from the running application. The view-layer in the green access schema selects the shadow table, and exposes the shadow table with the correct name (e.g., create view TAB as select * from TAB #1). After the switch-over, the blue table (TAB) can be dropped.
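Correspondingly, a sketch of the shadow-table approach, continuing the hypothetical names above (tab_2 standing for the copy with the new structure):

```sql
-- Create the shadow table with the new structure, and migrate the content.
CREATE TABLE data_schema.tab_2 (id INTEGER PRIMARY KEY, b VARCHAR(20));
INSERT INTO data_schema.tab_2 (id, b)
  SELECT id, lpad(b, 20, '0') FROM data_schema.tab_1;

-- The green access schema exposes the shadow table under the DDIC name
-- (an alternative to the shadow-column view sketched above).
CREATE OR REPLACE VIEW access_green.tab AS
  SELECT id, b FROM data_schema.tab_2;

-- After switch-over, the blue table can be dropped:
-- DROP TABLE data_schema.tab_1;
```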
In view of the above context, implementations of the present disclosure leverage the blue-green approach in an application upgrade, where data is also replicated between databases. In accordance with implementations of the present disclosure, replication is considered during a change event, and operates without interruption during the blue-green deployment. More particularly, and as described in further detail herein, implementations of the present disclosure enable mechanisms in the blue-green procedure to plug-in replication tasks, so replication can react to changes, and prepare replication of new content. In some implementations, the blue-green approach is implemented on both the data consumer-side (e.g., the database that data is replicated to) of replication, and the data source-side (e.g., the database that data is replicated from). This coordinated approach provides, among other advantages, a reduction in resource consumption, and in the duration of a new full-upload of data for changed tables.
Implementations of the present disclosure include a deploy tool (e.g., a zero downtime maintenance (ZDM) deploy tool), and a replication system (RS). An example ZDM procedure, and respective deploy tool (referred to as an upgrade tool) are described in further detail in commonly assigned U.S. application Ser. No. 14/495,056, filed on Sep. 24, 2014, the disclosure of which is expressly incorporated herein by reference in its entirety for all purposes.
In accordance with implementations of the present disclosure, the deploy tool runs the blue-green upgrade procedure, computes which tables are migrated using a clone table (also referred to as a shadow table), and/or a clone column (also referred to as a shadow column), and computes and executes statements to transform the source-side system (e.g., providing a green access schema), as described herein. The deploy tool provides respective statements to the RS for execution on the consumer-side system. In this manner, the consumer-side system can come into symmetry with the source-side system. In some examples, the deploy tool waits for the RS to complete one or more actions, before the deploy tool conducts further actions, and sends respective statements to the RS.
In accordance with implementations of the present disclosure, the RS instruments the source-side, and the consumer-side, and replicates data from the source-side to the consumer-side, even during upgrade of an application acting on the source-side. The RS performs an initial data transfer, and an incremental transfer of changed data. The RS receives statements from the deploy tool for actions to be executed on the consumer-side. The RS is triggered by the deploy tool to execute actions, and notifies the deploy tool once the actions are completed.
In accordance with implementations of the present disclosure, a blue access schema (Access B) 212 is provided in the first database system 204. The Access B 212 enables the application 202 to interact with data stored in the first database system 204. In the depicted example, the Access B 212 includes a view (V: T1), and a view (V: LOGT1) on respective tables (T1 #1, TLOG1). Also, a replication trigger (RT) is provided between the views. For example, and as introduced above, the RS 208 creates the log table (TLOG1) in the data schema, the view (V: LOGT1) on the log table in the Access B 212, and the RT. A corresponding blue access schema (Access B) 214 is also provided in the second database system 206, which enables the RS 208 to replicate data into the second database system 206. In the depicted example, the Access B 214 includes a view (V: T1) on a respective table (T1 #1). Accordingly, a symmetric setup is provided on the consumer-side of replication relative to the source-side.
As described herein, the RS 208 reads the log table through the Access B 212, and writes to the consumer-side through the Access B 214. Accordingly, changes done to the table in the data schema (write) are not recorded by the trigger in the access schema. If a data manipulation is executed directly on the database table, this is not recorded by the RS 208. This provides the advantages that the RT does not need to be programmed with the name-mapping of tables and columns, the procedures are de-coupled, and the RS 208 does not need to know about intermediary structures.
In further detail, during a blue-green deployment, changes made to the persistency in the replication source (source-side) are replicated to the target (consumer-side). Alternatively, and to reduce the volume of data to be sent for replication, the action executed on the source-side is mirrored on the consumer-side. For some operations, which are generic and can be executed within the database, this is feasible (e.g., cloning tables, columns, altering table structures). For other operations, however, the upgrade tools would also have to run on the consumer-side (e.g., application defined methods that write content). This is neither feasible nor desirable. Accordingly, the changes are managed partially using in-place action on the consumer-side mirroring action on the source-side, and partially by replicating the change done on the source-side.
In some implementations, with the blue-green deployment, multiple mechanisms are used to modify table structure and content. Example mechanisms include clone table, and clone column.
With regard to clone table, a new table is created, and a suffix with a version number is appended to the table name. For example, a table TAB #<n> is cloned to provide a clone table TAB #<n+1>. The data is copied from the original table to the clone table. In case the table is read-write (R/W) for the application during the deployment, changes are replicated from TAB #<n> to TAB #<n+1> using database triggers. The data copy adjusts content to the new structure of TAB #<n+1>. The deployment procedure writes data to TAB #<n+1>. A projection view in the target access schema is created to expose the table to the consumers (e.g., also views). The projection view maps the database names to the DDIC names.
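The trigger-based synchronization from the original table to its clone, while the blue application is still writing, might be sketched as follows (PostgreSQL syntax; t2_1, and t2_2 stand for TAB #<n>, and TAB #<n+1>; in practice the deploy tooling generates such objects, so this is illustrative only):

```sql
CREATE TABLE data_schema.t2_1 (id INTEGER PRIMARY KEY, f VARCHAR(10));
CREATE TABLE data_schema.t2_2 (id INTEGER PRIMARY KEY, f VARCHAR(20));

-- Initial copy, adjusting content to the new structure.
INSERT INTO data_schema.t2_2 (id, f)
  SELECT id, lpad(f, 20, '0') FROM data_schema.t2_1;

-- Mirror subsequent writes on t2_1 into t2_2 (deletes handled analogously).
CREATE FUNCTION sync_clone() RETURNS trigger AS $$
BEGIN
  INSERT INTO data_schema.t2_2 (id, f)
  VALUES (NEW.id, lpad(NEW.f, 20, '0'))
  ON CONFLICT (id) DO UPDATE SET f = EXCLUDED.f;
  RETURN NEW;
END $$ LANGUAGE plpgsql;

CREATE TRIGGER t2_1_sync AFTER INSERT OR UPDATE ON data_schema.t2_1
FOR EACH ROW EXECUTE FUNCTION sync_clone();
```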
Implementations of the present disclosure provide a replication approach for clone table. In some implementations, the clone table is created in the target (consumer-side) database, and uses the same structure as defined in the source-side. Data copied from TAB #<n> to TAB #<n+1> is also copied on the consumer-side. In some examples, statements used to clone the table on the source-side can be used to clone the table on the consumer-side. This minimizes the data volume to be sent from source-side to consumer-side. In some examples, changes done to the clone table after creation (e.g., writing data through deployment, writing data through replication) need to be replicated to the clone table on the consumer-side. To achieve this, a replication setup for the clone table is provided, and can include, without limitation, a log table, and a trigger (RT).
With regard to clone column, a new column is added to the table, where the name of the clone column is based on the name of the original column. For example, if the column is named F, the clone column is F #1, or, if the original column is F #<n>, the clone column is F #<n+1>. Data is copied from the original column to the clone column, adjusting content to the new format. In some examples, the adjustment can be done using a type mapping (e.g., HANA type mapping), using generically provided SQL code (e.g., for making NUMC longer (a number written to a string, 0-padded from the left)), or using application-provided SQL. In case the table is R/W for the application during the deployment, changes are replicated from F #<n> to F #<n+1> using database triggers. A projection view in the consumer-side access schema is created to expose the table to the consumers (e.g., also views). The projection view maps the database names to the DDIC names.
Implementations of the present disclosure provide a replication approach for clone columns. A clone column created in the consumer-side database system will have the same structure as defined in the source-side database system. If the data fill were run on the source-side, and all changes replicated to the consumer-side, the full table would be replicated again, as every row is updated, and triggers operate on the row level. To avoid this, implementations of the present disclosure exclude the initial data fill for the clone column from replication, and instead update the clone column in the consumer-side database from the corresponding original column in the consumer-side database. In some examples, only changes to the table performed by the application (e.g., under upgrade) are replicated. To achieve this, implementations of the present disclosure exclude the clone column update from replication, execute the same clone column update executed on the source-side table also on the consumer-side table, and replicate changes done by the application to the source-side table (and the triggered update on the clone column) to the consumer-side table.
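Concretely, the bulk fill can be issued as identical local statements on each side, so that only application-driven changes pass through the replication channel (hypothetical sketch; t1_1 is the table, and f_1/f_2 stand for F #1/F #2):

```sql
-- Hypothetical table holding the column to be migrated.
CREATE TABLE data_schema.t1_1 (id INTEGER PRIMARY KEY, f_1 VARCHAR(10));

-- Executed locally on BOTH the source-side, and the consumer-side database;
-- the row-by-row fill therefore never travels through replication.
ALTER TABLE data_schema.t1_1 ADD COLUMN f_2 VARCHAR(20);
UPDATE data_schema.t1_1 SET f_2 = lpad(f_1, 20, '0');
```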
In some implementations, for shadow column migration, the shadow column is filled from the original column, transforming the data upon transfer. For the shadow table approach, the shadow table is filled from the original table, transforming the data upon transfer. The shadow column is filled on the consumer-side from the original column. A trigger (RT) is used to replicate data from the original column to the shadow column also on the consumer-side. For the shadow table approach, the initial data transfer to set up the shadow table is executed on the consumer-side as well. Data replication transfers the data changes from the original table, and the shadow table, on the source-side to the consumer-side.
After the upgrade procedure is initiated, the clone column F #2, and the clone table T2 #2 are created on both the source-side, and the consumer-side, as described in further detail herein. Each of the clone column F #2, and the clone table T2 #2 (on both the source-side, and the consumer-side) is initially empty. Data is copied from the original column/table to the respective clone column/table. In some implementations, an initial data load is performed locally in each of the source-side, and the consumer-side. For example, the data in the original column F #1 is copied to the clone column F #2 (in both the source-side, and the consumer-side). As another example, the data in the original table T2 #1 is copied to the clone table T2 #2 (in both the source-side, and the consumer-side). After the initial data load, the source-side, and the consumer-side are symmetrical, and only data deltas are replicated from the source-side to the consumer-side.
More particularly, the blue-green procedure includes phases of preparation, green system build, switch-over, and blue system deconstruction.
The RS 408 creates a log table on T2 #2 (LOGT2 #2), a projection view to LOGT2 #2, and a replication trigger (RT) in the Access G 412′. The RS 408 is then able to replicate data to the consumer-side. The upgrade of the application 402 is deployed, during which data is written to T2 #2 through the respective projection view of the Access B 412, and the ZDMT (e.g., data written to T2 #1 through the projection view T2 of the Access B 412 is written to T2 #2 through the ZDMT).
After the upgrade is complete, the application (upgraded to V2) on the source-side is switched over to the green system (the Access G 412′). The RS 408 reads any remaining blue log entries, and replicates them to the blue system on the consumer-side. The RS 408 is triggered to switch on the consumer-side from blue to green (i.e., from the Access B 414 to the Access G 414′). The blue system is deconstructed. For example, the log table (e.g., through the DDIC), the RS trigger (list of statements), the projection view on T2 #1 in the Access B 412, the ZDMT on T2 #1, and the Access B 412 itself are dropped, and respective statements are passed to the RS 408.
In general, and as described herein, clone table replication includes set-up of the RS, a procedure executed by the RS, and replication by the RS. In some implementations, set-up of the RS includes providing a replication trigger (RT) on the projection view, adding a log table, and a projection view on the log table, and executing a sequence (statements) in the data schema. In some implementations, the procedure executed by the RS includes receiving statements, creating a clone table (e.g., T2 #2), copying data from the original table (e.g., T2 #1) to the clone table (e.g., T2 #2) on the consumer-side, building a green system (e.g., creating the Access G including a projection view for T2 onto T2 #2), switching over (e.g., switching the consumer-side from the Access B to the Access G), and deconstructing the blue system (e.g., executing statements from the source-side, dropping the projection view in the Access B, dropping the ZDMT, and dropping the original table T2 #1). In one example, replication includes the RS replicating from T2 #1 to the consumer-side, while the upgrade runs. This is done until the switch-over is complete, and no more data is written to T2 #1. The RS builds T2 #2 with an initial data set in the consumer-side. In another example, replication includes the RS replicating from T2 #2 to the consumer-side, after an initial data load on the consumer-side.
In some implementations, data replicated using the ZDMT from T2 #1 to T2 #2 is not recorded by RS triggers. That is, and as described herein, the trigger (RT) is on the projection view, while the ZDMT writes to the database table. Consequently, data copied using the ZDMT is explicitly replicated by the RS. To achieve this, in some examples, the RS reads the log table on T2 #1 (e.g., LOGT2 #1), and the log table on T2 #2 (e.g., LOGT2 #2) to read the keys, but reads data from T2 #2 for the consumer-side content. If a key is in both logging tables, the content is still read from T2 #2.
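A sketch of such a read, using the hypothetical names from above (logt2_1, and logt2_2 stand for the logging tables LOGT2 #1, and LOGT2 #2):

```sql
-- Logging tables for T2 #1, and T2 #2 (hypothetical; created by the RS).
CREATE TABLE data_schema.logt2_1 (id INTEGER);
CREATE TABLE data_schema.logt2_2 (id INTEGER);

-- Keys are taken from both logging tables, but row content is always read
-- from the clone table t2_2, so a key present in both logs is still
-- materialized from t2_2.
SELECT t.*
FROM data_schema.t2_2 AS t
WHERE t.id IN (
  SELECT id FROM data_schema.logt2_1
  UNION
  SELECT id FROM data_schema.logt2_2
);
```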
In some implementations, the blue-green procedure includes phases of preparation, green system build, switch-over, and blue system deconstruction.
The green system is built. More particularly, a projection view (V: T1) is created in the Access G 412′ to select from the clone column F #2 instead of the original column F #1. This projection view statement (e.g., DDL statement) is passed to the RS 408. A projection view (V: LOGT1) is created in the Access G 412′ for the log table (LOGT1). The RS 408 creates a RT on the projection view in the Access G 412′. Although the RS 408 has access to the Access G 412′ to read from the target release, no data is recorded or replicated until the switch-over. During the switch-over, the application on the source-side (now upgraded to application V2) is switched over to the green system. The RS 408 is triggered to read from the green log table through the Access G 412′, instead of the blue. The RS 408 also triggers the consumer-side to switch over to the green system.
During deconstruction of the blue system, the RT, the projection view (V: T1) for the original table (T1 #1), and the projection view (V: LOGT1) for the log table (LOGT1) are deleted from the Access B 412, and respective statements are provided to the RS 408. The ZDMT, and the original column (F #1) are dropped, and respective statements are provided to the RS 408.
In general, and as described herein, clone column replication includes set-up of the RS, a procedure executed by the RS, and replication by the RS. In some implementations, set-up of the RS includes providing a replication trigger (RT) on the projection view, adding a log table, and a projection view on the log table, and executing a sequence (statements) in the data schema. In some implementations, the procedure executed by the RS includes receiving an add column statement (e.g., a DDL statement to add the clone column F #2), executing the add column statement on the consumer-side (e.g., to provide a respective clone column on the consumer-side), receiving a create ZDMT statement, executing the create ZDMT statement on the consumer-side (e.g., to provide a respective ZDMT on the consumer-side), and executing data replication from F #1 to F #2 on the consumer-side. The green system is built including a projection view of the replicated table, and the consumer 403 is switched over to the green system. The blue system is deconstructed, as described herein.
One or more clone data components are provided in the first database system (502). For example, a deploy tool (e.g., a ZDM deploy tool) triggers creation of the one or more clone data components in the first database system. As described herein, example clone data components can include a clone column (e.g., a copied column of a table, within the table), and a clone table (e.g., a copied table). A green access schema is provided in the first database system (504). For example, the deploy tool triggers creation of the green access schema, which includes one or more views, and one or more replication triggers (RTs) to replicate data from the one or more clone data components in the first database system.
Statements are provided to a replication system (506). For example, the deploy tool provides statements (e.g., DDL statements) to the replication system (e.g., the replication system 208 of FIG. 2).
Data is replicated from the first database system to the second database system during an upgrade procedure (512). For example, the deploy tool initiates deployment of an upgrade to upgrade the application (e.g., from V1 to V2). During the upgrade, the application interacts with data in the first database system through the blue access schema, including data stored to, and/or modified within, the one or more clone data components. Also during the upgrade, the green access schema enables replication of the data to the second database system through the replication system. For example, and as described herein, a first handler of the replication system (e.g., that handles replication prompted by the current version of the application) enables replication of data to the second database system through the blue access schema of the second database system.
A switch-over is performed to the green access schema, and the blue access schema is removed (514). For example, upon completion of the upgrade, the upgraded application (e.g., the application 202′) interacts with the first database system through the green access schema. As described herein, the blue access schema is removed from the first database system. Statements are provided to the replication system (516). For example, the deploy tool provides statements (e.g., DDL statements) to the replication system. In some examples, the statements correspond to respective actions executed in the first database system to switch-over to the green access schema, and remove the blue access schema. A switch-over is performed to the green access schema, and the blue access schema is removed in the second database system (518). For example, the replication system triggers the switch-over to the green access schema, and removal of the blue access schema, based on the statements.
Referring now to FIG. 6, a schematic diagram of an example computing system 600 is provided.
The memory 620 stores information within the system 600. In one implementation, the memory 620 is a computer-readable medium. In one implementation, the memory 620 is a volatile memory unit. In another implementation, the memory 620 is a non-volatile memory unit. The storage device 630 is capable of providing mass storage for the system 600. In one implementation, the storage device 630 is a computer-readable medium. In various different implementations, the storage device 630 may be a floppy disk device, a hard disk device, an optical disk device, or a tape device. The input/output device 640 provides input/output operations for the system 600. In one implementation, the input/output device 640 includes a keyboard and/or pointing device. In another implementation, the input/output device 640 includes a display unit for displaying graphical user interfaces.
The features described can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. The apparatus can be implemented in a computer program product tangibly embodied in an information carrier (e.g., in a machine-readable storage device, for execution by a programmable processor), and method steps can be performed by a programmable processor executing a program of instructions to perform functions of the described implementations by operating on input data and generating output. The described features can be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. A computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
Suitable processors for the execution of a program of instructions include, by way of example, both general and special purpose microprocessors, and the sole processor or one of multiple processors of any kind of computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. Elements of a computer can include a processor for executing instructions and one or more memories for storing instructions and data. Generally, a computer can also include, or be operatively coupled to communicate with, one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits).
To provide for interaction with a user, the features can be implemented on a computer having a display device such as a CRT (cathode ray tube) or LCD (liquid crystal display) monitor for displaying information to the user and a keyboard and a pointing device such as a mouse or a trackball by which the user can provide input to the computer.
The features can be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination of them. The components of the system can be connected by any form or medium of digital data communication such as a communication network. Examples of communication networks include, for example, a LAN, a WAN, and the computers and networks forming the Internet.
The computer system can include clients and servers. A client and server are generally remote from each other and typically interact through a network, such as the described one. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. In addition, other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Accordingly, other implementations are within the scope of the following claims.
A number of implementations of the present disclosure have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the present disclosure. Accordingly, other implementations are within the scope of the following claims.