Embodiments relate to techniques for managing data traffic in environments that do not provide native atomic transactions to provide, for example, atomic data ingestion. More particularly, embodiments relate to techniques for managing data traffic in such environments by, for example, utilizing two or more coordinated data tables.
A “data lake” is a collection of data from multiple sources that is not stored in a standardized format. Because of this, collection of the data in the data lake is not as systematic and predictable as more structured collections of data. Thus, many of the tools that are utilized to ingest data into a data lake (or other data collection structures) do not (or cannot) provide atomic writes to the final data source.
Embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which like reference numerals refer to similar elements.
In the following description, numerous specific details are set forth. However, embodiments of the invention may be practiced without these specific details. In other instances, well-known structures and techniques have not been shown in detail in order not to obscure the understanding of this description.
Some embodiments described herein can be based on a Databricks-based architecture, which is a cloud platform for running data lake applications. Databricks is available from Databricks, Inc. of San Francisco, Calif., USA. Databricks can provide a unified analytics platform that can host and run any number of apps. These embodiments are sample embodiments and other configurations with other architectures and mechanisms to support data lake applications can be provided. For example, Azure Databricks available from Microsoft could be utilized.
Some data lake implementations are based on Apache Hadoop, which provides various software utilities that provide distributed processing of large data sets across multiple computing devices. Other data lake implementations can be based on Apache Spark, which provides a framework for real time data analytics using distributed computing resources. Other platforms and mechanisms can be utilized to manage data lakes (or other large collections of data). Both Hadoop and Spark are available from The Apache Software Foundation of Wakefield, Mass., USA.
Current configurations of Databricks do not provide for the ability to run Databricks outside of a managed cloud environment, which results in challenges for providing continuous integration and continuous deployment (CI/CD) for data lake applications. Providing CI/CD introduces further complexity in leveraging Databricks libraries and utilities such as, for example, Databricks Connect and the Databricks Command Line Interface (CLI), due to the overhead of installation and configuration. Described herein is a CI/CD framework that can function to provide a seamless experience for developing, for example, Spark-based Delta Lake applications from local developer environments, including testing, production deployment and application runs. Other non-Apache based architectures and frameworks can be supported using the novel concepts described herein.
As described in greater detail below, the architectures and techniques described herein can provide “out-of-the-box” functionality to deploy, test and run Delta Lake Spark applications on Databricks from any environment (e.g., local, on-premises, cloud). Further, a template can be provided to define a deployment specification of Delta Lake Spark application(s) on Databricks. Similar architectures can be utilized to provide non-Apache and/or non-Databricks embodiments having the same or similar functionality.
In the example of FIG. 2, a set of deployment parameters (e.g., environment variables) can be utilized to control the build and deployment process.
In one embodiment, Build Type 210 (BUILD_TYPE) indicates the type of build to run (e.g., PR, STAGING, PERF, PROD). Deployment environment 220 (DEPLOY_ENV) indicates the target environment to which the container will be deployed (e.g., STAGING, PERF, PROD). In one embodiment, cluster profile 230 (AWS_INSTANCE_PROFILE_ARN) indicates the cloud role assigned to the runtime cluster. The example of FIG. 2 is based on an Amazon Web Services (AWS) environment; other cloud environments can be similarly supported.
In one embodiment, Databricks Host 240 (DATABRICKS_HOST) indicates the Uniform Resource Locator (URL), or other indicator, for the Databricks host. Databricks Token 250 (DATABRICKS_TOKEN) indicates the Databricks access token that will be used by the Databricks CLI or Databricks API. Image name 260 (IMAGE_NAME) is the Docker image name in the internal Docker repository.
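For illustration, the following is a minimal Scala sketch of reading these parameters from the process environment; the case class and helper shown are illustrative assumptions and not part of any particular embodiment:

```scala
// Sketch: read the deployment parameters of FIG. 2 from the process environment.
case class DeployConfig(
  buildType: String,          // BUILD_TYPE 210: PR, STAGING, PERF or PROD
  deployEnv: String,          // DEPLOY_ENV 220: STAGING, PERF or PROD
  instanceProfileArn: String, // AWS_INSTANCE_PROFILE_ARN 230: cluster cloud role
  databricksHost: String,     // DATABRICKS_HOST 240: URL of the Databricks host
  databricksToken: String,    // DATABRICKS_TOKEN 250: token for the CLI/API
  imageName: String           // IMAGE_NAME 260: Docker image name
)

object DeployConfig {
  private def env(key: String): String =
    sys.env.getOrElse(key, sys.error(s"missing required variable: $key"))

  def fromEnv(): DeployConfig = DeployConfig(
    buildType          = env("BUILD_TYPE"),
    deployEnv          = env("DEPLOY_ENV"),
    instanceProfileArn = env("AWS_INSTANCE_PROFILE_ARN"),
    databricksHost     = env("DATABRICKS_HOST"),
    databricksToken    = env("DATABRICKS_TOKEN"),
    imageName          = env("IMAGE_NAME")
  )
}
```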
In one embodiment, Databricks 300 can provide the environment in which the app can run. As mentioned above, Databricks is a cloud platform that can provide an operating environment in a data lake architecture. Alternative environments can also be supported. In one embodiment, CI platform 305 can provide a build management and continuous integration platform (e.g., a server) to manage the workflow of FIG. 3.
In one embodiment, version platform 310 can provide version control functionality for managing changes in source code or other types of files. In one embodiment, version platform 310 can be, for example, a Git-based platform such as GitHub, available from GitHub, Inc. of San Francisco, Calif., USA. Other platforms can be utilized to provide version control functionality. In one embodiment, build platform 315 can provide continuous compilation, testing and deployment functionality. Build platform 315 can be, for example, sbt available from Lightbend, Inc. of San Francisco, Calif., USA. Alternative platforms, for example, Maven or Ant, can be utilized to provide continuous deployment functionality.
In one embodiment, Docker Image Repository 320 functions to store or host any number of Docker images. Docker Image Repository 320 can be, for example, a virtual machine (VM) that functions as the repository. In other example embodiments, Docker Image Repository 320 can be a Docker Hub repository.
In one embodiment, a root folder (or other structure) can be set up to contain all deployment specification files grouped by environment. An environment can be, for example, development, staging, performance, production, etc. The deployment script can be used to read files from a specified folder of an environment and be used to create or update job operations for each file. One example folder structure is described in greater detail below.
In one embodiment, CI platform 305 can pull code for the app to be deployed (340) from version platform 310. Version platform 310 returns the code (345). CI platform 305 causes the code to be assembled (350) by build platform 315, which results in, for example, a Java Archive (jar) file (355). Other programming languages and/or coding structures can also be utilized.
CI platform 305 also functions to pull a container image (360) from Docker Image Repository 320 and utilizes the received image (365) to create a container (370). The container can be utilized to deploy the jar file (375) to Databricks environment 300, run integration tests (380) in Databricks environment 300, run deploy job script (385) in Databricks environment 300, and/or run the app (390) in Databricks environment 300.
In one embodiment, all job definition files are read, 400. For each job, it is determined whether the job has already been created in the Databricks environment (as described with respect to FIG. 3), 410. If the job has not been created, the job is created, 420; if the job already exists, the job definition is updated, 430.
After the job has been created (420) or updated (430), the run list is checked to determine whether the job is included, 440. If the job is on the run list, 440, the job is run, 450. If the job is not on the run list, 440, the process can terminate. The process of FIG. 4 can be repeated for each job definition file read at 400.
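For illustration, the following Scala sketch shows one possible implementation of the process of FIG. 4, assuming job definitions are stored one per file and job names are derived from file names; the JobsClient trait is a hypothetical abstraction over the Databricks CLI or Jobs API, not an actual Databricks interface:

```scala
import java.nio.file.{Files, Path}
import scala.jdk.CollectionConverters._

// Hypothetical abstraction over the Databricks Jobs API or CLI.
trait JobsClient {
  def exists(jobName: String): Boolean
  def create(definition: String): Unit
  def update(jobName: String, definition: String): Unit
  def run(jobName: String): Unit
}

object JobDeployer {
  // 400: read all job definition files for the target environment, then
  // create or update each job (410/420/430) and run it if listed (440/450).
  def deployJobs(folder: Path, client: JobsClient, runList: Set[String]): Unit = {
    val files = Files.list(folder).iterator().asScala.toList
    for (file <- files) {
      val name       = file.getFileName.toString.stripSuffix(".json")
      val definition = new String(Files.readAllBytes(file), "UTF-8")
      if (client.exists(name)) client.update(name, definition) // 430
      else client.create(definition)                           // 420
      if (runList.contains(name)) client.run(name)             // 440/450
    }
  }
}
```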
In one embodiment, within jobs folder (“databricks-jobs”) 500, one or more subfolders (e.g., 520, 540) can be utilized to further organize files related to providing a Databricks-based CI/CD environment. The example files (e.g., 530, 550) illustrated in FIG. 5 can be deployment specification (e.g., job definition) files for the corresponding environments.
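For illustration, one possible shape of such a deployment specification file, assuming the JSON job settings format accepted by the Databricks Jobs API, is shown below; all names and values are illustrative assumptions:

```json
{
  "name": "engagement-ingestion-staging",
  "new_cluster": {
    "spark_version": "7.3.x-scala2.12",
    "node_type_id": "i3.xlarge",
    "num_workers": 2,
    "aws_attributes": { "instance_profile_arn": "arn:aws:iam::123456789012:instance-profile/databricks" }
  },
  "libraries": [ { "jar": "dbfs:/jars/engagement-ingestion.jar" } ],
  "spark_jar_task": { "main_class_name": "com.example.EngagementIngestion" }
}
```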
As discussed above, Databricks is a cloud platform for running, for example, Delta Lake Spark applications. However, in Databricks environments (or similar platforms), there are many challenges to applying and automating integration testing. For example, it is difficult to build an isolated application runtime context that may be required by each integration test suite execution on a shared cloud-based environment. Delta Lake Spark applications, for example, are richly integrated with external services including, for example, Apache Kafka for stream processing, Delta Lake Tables for database (e.g., SQL) operations, Apache Zookeeper, etc. Further, given that each of these applications provides a dedicated, relatively small piece of functionality as a micro-service, the number of applications may grow rapidly.
Without a good mechanism to effectively organize the applications, deploy testbeds and develop test scenarios, the resulting applications can be vulnerable to bugs, defects and errors. Various concepts and designs are described to overcome these challenges and provide a robust quality assurance mechanism for applications on a shared cloud platform environment.
At a high level, Delta Lake Spark applications can be utilized to provide three types of stream processing: 1) data stream ingestion with associated writes to multiple tables (e.g., data tables); 2) change data capture (CDC) stream management (also known as mutations) for data sources and merging the mutations with the multiple tables; and 3) data privacy management (e.g., General Data Protection Regulation, or GDPR, processing) to manage data retention and delete functionality.
As described in greater detail below, an organizational concept referred to herein as a “scenario” can be utilized to effectively organize applications and associated integration tests. In general, a scenario is a group of relevant applications that collectively fulfill certain kinds of functionalities. In various embodiments, test cases are developed per scenario.
In one embodiment, each scenario is defined by a scenario specification that includes three components: 1) metadata that describes the scenario; 2) a list of applications needed to perform the relevant functionalities of the scenario; and 3) a list of integration tests to verify the expected behaviors of the applications in the scenario.
As illustrated in the examples below, in the example embodiments each scenario specification is self-contained and addresses only one application function, the applications and tests in a specification are relevant only to that application function, and one application may appear across multiple scenario specifications.
As one example, an EngagementIDMutation scenario can be defined as follows.
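A minimal JSON sketch of one possible specification is shown below, following the three components described above; the field names, application names and test names are illustrative assumptions:

```json
{
  "metadata": {
    "name": "EngagementIDMutation",
    "description": "Merge engagement ID mutations into the engagement data table"
  },
  "applications": [
    "engagement-ingestion",
    "engagement-id-mutation"
  ],
  "integrationTests": [
    "EngagementIDMutationTest"
  ]
}
```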
As another example, a NameMutation scenario can be defined as follows.
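A corresponding sketch for the NameMutation scenario, under the same assumptions, could be:

```json
{
  "metadata": {
    "name": "NameMutation",
    "description": "Merge name mutations into the name data table"
  },
  "applications": [
    "engagement-ingestion",
    "name-mutation"
  ],
  "integrationTests": [
    "NameMutationTest"
  ]
}
```

Note that the engagement ingestion application appears in both sketches, consistent with one application appearing across multiple scenario specifications.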
In one embodiment, engagement app(s) 630 can function to manage the ingestion of data into a data lake or other environment. As part of this process, engagement app(s) 630 can write to data table(s) 640. CDC process app(s) 650 can function to manage mutations (or merging) of data in data table(s) 640. GDPR process app(s) 660 can function to manage deletions of data from data table(s) 640.
In one embodiment, engagement ingestion app(s) 730 can function to manage the ingestion of data into a data lake or other environment. As part of this process, engagement ingestion app(s) 730 can write to engagement data table 740 and to name data table 745. Engagement ID Mutation app(s) 750 can function to manage mutations (or merging) of data in engagement data table 740. Similarly, Name Mutation app(s) 760 can function to manage mutations (or merging) of names in name data table 745.
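For illustration, the following Scala sketch shows one way a mutation app might merge change records into a Delta table using the Delta Lake merge API; the table and column names are illustrative assumptions, not part of any described embodiment:

```scala
import io.delta.tables.DeltaTable
import org.apache.spark.sql.{DataFrame, SparkSession}

object EngagementIdMutation {
  // Merge a batch of ID mutations into the engagement Delta table: each
  // row whose current ID matches a mutation record is rewritten to the new ID.
  def merge(spark: SparkSession, mutations: DataFrame): Unit =
    DeltaTable.forName(spark, "engagement")
      .as("t")
      .merge(mutations.as("s"), "t.engagement_id = s.old_engagement_id")
      .whenMatched()
      .updateExpr(Map("engagement_id" -> "s.new_engagement_id"))
      .execute()
}
```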
In one embodiment, logic within container 805 can read one or more scenario specifications, 800 and 825. In some example embodiments, the scenario specifications can be in the form of JavaScript Object Notation (JSON) files and can be organized in directories.
Retrieved specifications (825) can be utilized to create a cluster (830) in Databricks environment 810. A cluster ID is returned (835) from Databricks environment 810 to container 805. The logic in container 805 can create a namespace string, 840, to create scoped objects. In one embodiment, a namespace is a globally unique string to define the scope of an application runtime context. Use of the namespace string and benefits from its use are described in greater detail below with respect to FIG. 9.
The namespace string can be utilized to deploy one or more apps (845) to Databricks environment 810 by passing the cluster ID and the namespace to build a private testbed. The logic in container 805 can then cause one or more database tables to be created (850) in Databricks environment 810.
After the apps have been deployed, logic within container 805 can run tests (860) with integration test platform 855. The resulting test data can be published (865) to Kafka environment 815, or a similar data streaming environment. In one embodiment, integration test platform 855 can provide one or more query tables (870) to Databricks environment 810.
The query tables can be utilized for testing within Databricks environment 810, and test results can be returned (875) to integration test platform 855. Clean-up operations can then be performed (880) in response to integration test platform 855, and clean-up operations can be performed (885) in response to logic in container 805. Upon completion, the cluster can be terminated, 890.
In one embodiment, each integration test execution (with associated scenario specification) is run in a private testbed in which: 1) a new cluster is created as a separate runtime environment, and 2) scoped objects with a given namespace are created by applications (and their dependent services), which can be considered an Isolated Application Runtime Context (IARC).
As used herein, an application context is an abstraction of an application runtime as a component to build an isolated runtime context. In one embodiment, it can be implemented as, for example, a Scala class. In one embodiment, the namespace and application properties can be used as inputs to generate scoped properties that dependent services can use to create scoped objects. Examples of dependent services include Apache Kafka, Zookeeper, AWS S3, Sumo Logic, and Datadog.
In the example of FIG. 9, the first column (“Property Key”) is the set of property keys to be consumed by dependent services, the second column (“Value”) contains sample values for those property keys, and the third column (“Scoped Value”) indicates the resulting scoped values. Scoped properties 930 can then be utilized to create one or more dependent services (e.g., 940, 942, 944), each of which can have corresponding scoped objects (e.g., 950, 952, 954).
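For illustration, the following Scala sketch shows one possible way to generate a unique namespace and apply it to application properties; the prefixing convention shown is an illustrative assumption:

```scala
import java.util.UUID

object Namespacing {
  // Generate a globally unique namespace string for one test execution.
  def newNamespace(scenario: String): String =
    s"$scenario-${UUID.randomUUID().toString.take(8)}"

  // Apply the namespace to application properties so that dependent services
  // (e.g., Kafka topics, Delta tables) create scoped, collision-free objects.
  def scopeProperties(namespace: String, props: Map[String, String]): Map[String, String] =
    props.map { case (key, value) => key -> s"$namespace.$value" }
}

object NamespacingExample extends App {
  // Two runs of the same scenario receive distinct topic and table names,
  // so concurrent test executions on a shared cluster cannot collide.
  val scoped = Namespacing.scopeProperties(
    Namespacing.newNamespace("EngagementIDMutation"),
    Map("kafka.topic" -> "engagement", "delta.table" -> "engagement")
  )
  println(scoped)
}
```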
The techniques and architectures described herein can provide separation of concerns in terms of deployment and integration testing for Delta Lake Spark (and other types of) applications in a shared environment (e.g., Databricks). In some embodiments, with the design of a scenario, applications and tests can be organized, and test cases can be developed within a scenario. In some embodiments, utilizing scenario specifications, scenario-based deployment and integration testing can be achieved. In some embodiments, utilizing namespaces and application context, an isolated application runtime context can be built for each integration testing execution in a shared (e.g., Databricks) environment.
Electronic system 1100 can provide the functionality of one or more of the platforms and/or environments described above (e.g., a Databricks environment, a CI platform). Further, electronic system 1100 can provide the data lake functionality and/or ingestion functionality discussed above. In alternative embodiments, different numbers of electronic systems can be interconnected (e.g., via networks and/or direct connection) to provide increased bandwidth and/or resources.
In one embodiment, electronic system 1100 includes bus 1105 or other communication device to communicate information, and processor 1110 coupled to bus 1105 that may process information. While electronic system 1100 is illustrated with a single processor, electronic system 1100 may include multiple processors and/or co-processors and each processor can include multiple processor cores. Electronic system 1100 further may include random access memory (RAM) or other dynamic storage device 1120 (referred to as main memory), coupled to bus 1105 and may store information and instructions that may be executed by processor 1110. Main memory 1120 may also be used to store temporary variables or other intermediate information during execution of instructions by processor 1110.
Electronic system 1100 may also include read only memory (ROM) and/or other static storage device 1130 coupled to bus 1105 that may store static information and instructions for processor 1110. Data storage device 1140 may be coupled to bus 1105 to store information and instructions. Data storage device 1140 such as a magnetic disk or optical disc and corresponding drive may be coupled to electronic system 1100. Instructions stored on data storage device 1140 can be executed by processor 1110 to provide some or all of the functionality described herein.
Electronic system 1100 may also be coupled via bus 1105 to display device 1150, such as a liquid crystal display (LCD) or other display device, to display information to a user. Alphanumeric input device 1160, including alphanumeric and other keys, may be coupled to bus 1105 to communicate information and command selections to processor 1110. Another type of user input device is cursor control 1170, such as a mouse, a trackball, or cursor direction keys to communicate direction information and command selections to processor 1110 and to control cursor movement on display 1150.
Electronic system 1100 further may include network interface(s) 1180 to provide access to a network, such as a local area network. Network interface(s) 1180 may include, for example, a wireless network interface having antenna 1185, which may represent one or more antenna(e). Network interface(s) 1180 may also include, for example, a wired network interface to communicate with remote devices via network cable 1187, which may be, for example, an Ethernet cable, a coaxial cable, a fiber optic cable, a serial cable, or a parallel cable.
In one embodiment, network interface(s) 1180 may provide access to a local area network, for example, by conforming to IEEE 802.11b and/or IEEE 802.11g standards, and/or the wireless network interface may provide access to a personal area network, for example, by conforming to Bluetooth standards. Other wireless network interfaces and/or protocols can also be supported.
IEEE 802.11b corresponds to IEEE Std. 802.11b-1999 entitled “Local and Metropolitan Area Networks, Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications: Higher-Speed Physical Layer Extension in the 2.4 GHz Band,” approved Sep. 16, 1999 as well as related documents. IEEE 802.11g corresponds to IEEE Std. 802.11g-2003 entitled “Local and Metropolitan Area Networks, Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, Amendment 4: Further Higher Rate Extension in the 2.4 GHz Band,” approved Jun. 27, 2003 as well as related documents. Bluetooth protocols are described in “Specification of the Bluetooth System: Core, Version 1.1,” published Feb. 22, 2001 by the Bluetooth Special Interest Group, Inc. Associated as well as previous or subsequent versions of the Bluetooth standard may also be supported.
In addition to, or instead of, communication via wireless LAN standards, network interface(s) 1180 may provide wireless communications using, for example, Time Division Multiple Access (TDMA) protocols, Global System for Mobile Communications (GSM) protocols, Code Division Multiple Access (CDMA) protocols, and/or any other type of wireless communications protocol.
Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
While the invention has been described in terms of several embodiments, those skilled in the art will recognize that the invention is not limited to the embodiments described, but can be practiced with modification and alteration within the spirit and scope of the appended claims. The description is thus to be regarded as illustrative instead of limiting.