The present invention relates generally to data processing, and more particularly to modeling data exchange in a data flow associated with an extract, transform, and load (ETL) process.
Extract, transform, and load (ETL) is a process in data warehousing that involves extracting data from outside sources, transforming the data in accordance with particular business needs, and loading the data into a data warehouse. An ETL process typically begins with a user defining a data flow that defines data transformation activities that extract data from, e.g., flat files or relational tables, transform the data, and load the data into a data warehouse, data mart, or staging table. A data flow, therefore, typically includes a sequence of operations modeled as data flowing from various types of sources, through various transformations, and finally ending in one or more targets, as described in U.S. patent application entitled “Classification and Sequencing of Mixed Data Flows” incorporated by reference above. In the course of execution of a data flow, data sometimes needs to be exchanged or staged at intermediate points within the data flow. The staging of data typically includes saving the data temporarily either in a structured physical storage medium (such as in a simple file) or in database temporary tables or persistent tables. In some cases, it may be optimal to save rows of data in the processing program's memory itself, especially when large and fast caches are present in the system (such “staging” is often referred to as “caching”).
ETL vendors conventionally support data exchange and staging internally, inside an ETL engine, in a proprietary fashion, especially if the ETL engine runs outside of a relational database. For example, the DataStage ETL engine permits users to build "stages" of operations, i.e., discrete steps in the transformation sequence, and physically move rows between different stage components in memory. (Note: the term "stage" as used in the context of the DataStage engine does not refer to the concept of saving rows to physical media, but rather to unique operational steps.) This method typically allows for some types of performance optimizations; however, the rows of data being moved between the different stages are usually in an internal format (stored in internal memory formats in buffer pools), and the only way a user can view the rows of data is to explicitly define a File Target (or a Table Target) in the data flow and force the rows of data to be saved into a file (or a table). That is, only the target of such a data flow can physically export the rows into a user-recognizable format.
Accordingly, a common problem of conventional data exchange and staging techniques is that users cannot specify staging points explicitly and directly in the middle of a data flow, but only at the end of a transformation sequence using target operators. Target operators typically cannot serve as exchange operators, since target operators are destinations. For example, if a user needs to extract rows from a SQL (Structured Query Language) table and pass the rows as input to another type of system that requires a file as input, then the user would have to represent such a process with a first job, modeled as a Table Source operation followed by a File Target or Export operation having a specific file name. The user would then have to schedule a second (separate) job to invoke an operation that uses the file as input.
In general, this specification describes methods, systems, and computer program products for generating code from a data flow associated with an extract, transform, and load (ETL) process. In one implementation, the method includes identifying a data exchange requirement between a first operator and a second operator in the data flow. The first operator is a graphical object that represents a first data transformation step in the data flow and is associated with a first type of runtime engine, and the second operator is a graphical object that represents a second data transformation step in the data flow and is associated with a second type of runtime engine. The method further includes generating code to manage data staging between the first operator and the second operator in the data flow associated with the ETL process. The code exchanges data from a format associated with the first type of runtime engine to a format associated with the second type of runtime engine.
Particular implementations can include one or more of the following advantages. In one aspect, a data station operator is provided that can be inserted into a data flow of an ETL process, in which the data station operator represents a staging point in the data flow. The staging is done to store intermediate processed data for tracking, debugging, ease of data recovery, and optimization purposes. In one implementation, the data station operator also permits data exchange between two linked operators that are incompatible, within a same single job. Relative to conventional techniques that require two separate jobs to perform a data exchange between two incompatible operators, it is more efficient to use one single job that encompasses both systems, especially if the job is run in parallel and in batches; e.g., if upstream producers and downstream consumers work in sync in a parallel, batch-driven mode, the end performance is better.
The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features and advantages will be apparent from the description and drawings, and from the claims.
Like reference symbols in the various drawings indicate like elements.
The present invention relates generally to data processing, and more particularly to modeling data exchange in a data flow associated with an extract, transform, and load (ETL) process. The following description is presented to enable one of ordinary skill in the art to make and use the invention and is provided in the context of a patent application and its requirements. The present invention is not intended to be limited to the implementations shown but is to be accorded the widest scope consistent with the principles and features described herein.
Running on the programmed computer 204 is an integrated development environment 208. The integrated development environment 208 is a software component that assists users (e.g., computer programmers) in developing, creating, editing, and managing code for target platforms.
In one implementation, the integrated development environment 208 includes a code generation system 210 that (in one implementation) is operable to generate code to manage data exchange and data staging within a sequence of operations defined in a data flow of an ETL process, as discussed in greater detail below. In one implementation, the code generation system 210 generates code using techniques as described in U.S. patent application entitled "Classification and Sequencing of Mixed Data Flows," Ser. No. 11/372,540, filed on Mar. 10, 2006 (the '540 application), which is incorporated by reference above.
In operation, a data flow 212 (e.g., an ETL data flow) is received by the code generation system 210, which converts the data flow 212 into a logical operator graph (LOG) 214. The logical operator graph 214 is a normalized, minimalist representation of the data flow 212 that includes a logical abstract collection of operators (including, e.g., one or more of a splitter operator, join operator, filter operator, table extract operator, bulk load operator, aggregate operator, and so on). In some implementations, all of the contents of the data flow 212 may be used "as-is" by the code generation system 210 and, therefore, the logical operator graph 214 will be the same as the data flow 212. The code generation system 210 converts the logical operator graph 214 into a query graph model (QGM) graph 216. The QGM graph 216 is an internal data model used by the code generation system 210 for analysis and optimization processes, such as chunking (in which a subset of a data flow is broken into several pieces to improve performance) and execution parallelism (in which disparate sets of operations within a data flow are grouped and executed in parallel to yield better performance). After analysis, the QGM graph 216 is converted into an extended plan graph 218. The extended plan graph 218 represents the code generated by the code generation system 210 and is sent to a runtime engine (e.g., an ETL engine) for execution.
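The conversion chain above can be summarized in a short sketch. The following is a minimal Java illustration; the class and method names (DataFlow, CodeGenerationPipeline, and so on) are hypothetical and are not taken from the described system.

```java
// Minimal sketch, with hypothetical names, of the pipeline described above:
// data flow -> logical operator graph (LOG) -> query graph model (QGM)
// -> extended plan graph sent to a runtime engine for execution.
class DataFlow {}               // user-drawn sequence of operators
class LogicalOperatorGraph {}   // normalized, minimalist representation
class QueryGraphModel {}        // internal model for analysis and optimization
class ExtendedPlanGraph {}      // generated code handed to the runtime engine

public class CodeGenerationPipeline {
    ExtendedPlanGraph generate(DataFlow flow) {
        LogicalOperatorGraph log = normalize(flow); // may be identical to the flow
        QueryGraphModel qgm = buildQgm(log);        // chunking, parallelism analysis
        return emitPlan(qgm);                       // plan for the ETL engine(s)
    }
    LogicalOperatorGraph normalize(DataFlow f) { return new LogicalOperatorGraph(); }
    QueryGraphModel buildQgm(LogicalOperatorGraph g) { return new QueryGraphModel(); }
    ExtendedPlanGraph emitPlan(QueryGraphModel q) { return new ExtendedPlanGraph(); }
}
```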
In one implementation, the integrated development environment 208 includes a data flow graphical editor (not shown) that enables users to build data flows (e.g., data flow 212). In one implementation, the data flow graphical editor provides a new operator—i.e., a data station operator—that a user can directly drag and drop into a data flow to link a preceding ("upstream") operator and one or more subsequent ("downstream") operators, which data station operator specifies a data staging point in the data flow. In general, operators are represented in a data flow as graphical objects. In one implementation, the data station operator can be used as a link between a first operator (or operation) associated with a first runtime engine (e.g., a relational database management system) and a second operator associated with a second runtime engine (e.g., a DataStage ETL engine).
In one implementation, the code generation system 210 is operable to automatically place individual data station operators (into a sequence of operations defined by a data flow) whenever a data exchange requirement is identified during the code generation process. In one implementation, the identification and insertion of the data exchange/staging points are seamless to the end user. Accordingly, in such an implementation, the code generation system 210 is operable to automatically generate code that manages data staging and data exchange on an ETL system that is capable of integrating various data processing runtime engines. For example, if a particular runtime engine can work with flat files as well as database tables, then, depending on certain optimization considerations, no exchange may be necessary; or, if flat files are determined to be processed faster, the code generation system 210 may decide that file staging from an upstream operation (e.g., one associated with a relational database engine) is more appropriate; or a decision could be made based on current system loads. A dynamic decision (based on various cost-benefit analyses) on whether a data station operator is required may be best made by the code generation system 210, and any suitable cost-benefit criteria can be implemented. In some cases, however, (expert) users or database administrators may have better knowledge than the code generation system 210 because of an understanding of expected data and expected system stress, e.g., when data is range partitioned and the administrator is aware of which particular database partition nodes will be stressed. In such cases, it may be more appropriate for a user to explicitly override any staging options automatically selected by the code generation system 210 (or for a user to explicitly define a different staging format when the code generation system 210 does not add one by default).
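A compact sketch of the placement rule just described follows. The engine names, class names, and the boolean cost check are illustrative assumptions only, not the actual decision logic of the described system.

```java
// Hypothetical sketch: a data station operator is required wherever two
// linked operators run on different engines, and may also be inserted for
// cost-benefit reasons (e.g., current system load) even when engines match.
enum Engine { RELATIONAL_DB, DATASTAGE_ETL, OTHER }

class FlowOperator {
    Engine engine;
    FlowOperator(Engine e) { engine = e; }
}

public class StagingPlacement {
    static boolean needsDataStation(FlowOperator upstream, FlowOperator downstream,
                                    boolean stagingIsCheaper) {
        if (upstream.engine != downstream.engine) return true; // formats must be exchanged
        return stagingIsCheaper; // dynamic cost-benefit decision
    }
}
```

An expert user's explicit choice would simply override whatever this automatic check returns.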
Accordingly, unlike a conventional system in which a user must represent data staging using two or more jobs in order to exchange data from one runtime format (e.g., a database table) to another runtime format (e.g., a flat file) in a data flow, the data processing system 200 permits a user to exchange data from one system format to another in the same single job through the data station operator. Users can, therefore, use such data stations to explicitly identify points of staging or exchange interest, e.g., for diagnostics, for performance improvements, or for overriding any default choices made by the code generation system 210.
User input is received inserting a second operator associated with a second type of runtime engine into the data flow (step 404). In one example, the first operator can be associated with a relational database engine and the second operator can be associated with a DataStage ETL engine. The first operator and the second operator can be transform operators that represent data transformation steps in the data flow. User input is received inserting a data station operator (e.g., data station operator 300) into the data flow between the first operator and the second operator to link the first operator and the second operator (step 406). Thus, the data processing system permits the user to explicitly add a data station operator into a data flow, in which the data station operator exchanges data from a format associated with the first runtime engine into a format associated with the second runtime engine in a same single job.
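For concreteness, a minimal sketch of these user steps follows; the class names, engine labels, and staging-object name are invented for illustration and do not reflect the actual editor API.

```java
// Hypothetical sketch of steps 404-406: two transform operators bound to
// different runtime engines, linked through an explicit data station operator
// within one single job.
public class BuildFlowExample {
    static class Op { final String engine; Op(String e) { engine = e; } }
    static class DataStationOp extends Op {
        final String stagingObject;
        DataStationOp(String obj) { super("exchange"); stagingObject = obj; }
    }

    public static void main(String[] args) {
        Op first = new Op("relational-db");     // e.g., SQL-based transform
        Op second = new Op("datastage-etl");    // e.g., DataStage transform
        DataStationOp station = new DataStationOp("STG_STEP1"); // bridges the formats
        java.util.List<Op> flow = java.util.List.of(first, station, second);
        flow.forEach(op -> System.out.println(op.engine));
    }
}
```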
Pre-determined criteria upon which the code generation system (or a user) may decide to insert a data station operator into a data flow include, for example, criteria associated with optimization, error recovery and restart, diagnostics and debugging, and cross-system data exchanges. With regard to optimization, intermediate (calculated) data may be staged to avoid having to perform the same calculation multiple times, especially in cases where the output of a single upstream operation is required by multiple downstream operations. Even when there is only one downstream consumer of the output data of a given operation, it may be prudent to stage rows of the output data, especially to physical storage, in order to either free up memory or avoid stressing an execution system (for example, to avoid running out of database log space). With respect to error recovery and restart, in complex systems, errors during the execution of data flows may occur either due to bad (dirty) data, which may cause database inconsistencies, or due to fatal errors caused by software failures, power loss, etc. In many cases, manual intervention is required to bring databases and other systems back to a consistent state. Thus, in one implementation, the code generation system (or user) inserts data station operators at specific consistency check points in the data flow, so that staging can be performed on intermediate results in a physical medium (for example, in database persistent tables or files). Accordingly, restarts (either manual or automatic) can be performed starting at these check points, thereby saving significant time.
In terms of diagnostics and debugging, staging may allow administrators to identify the core cause of problems; for example, an administrator can inspect staged rows to find bad data, which may even require the administrator to re-organize ETL processes to first clean such data. Users may also explicitly add data stations in a data flow to aid in debugging of the data flow, e.g., during development and testing cycles. An inspection of such staged rows provides a validation of whether the corresponding upstream operations did indeed perform as expected. With regard to cross-system exchanges, in a data processing system that is capable of integrating various data processing engines, such as the one described in the '540 application, the data being processed is a mix of various data types and formats that are specific to a given underlying (runtime) data processing engine. Some runtime processing systems may be equipped to process data inside database tables, others may only work with flat files, while still others may perform better using message queues. In some scenarios, external systems at a different (remote) site may be required to complete part of an operation, e.g., a "Name Address Lookup" facility which may be provided by an online vendor for cleansing customer addresses. Such an external vendor may even require data movement by means of a SOAP-based web service.
Provided below is further discussion regarding implementations of a data station operator and uses thereof.
Model of a Data Station Operator
In one implementation, a data station operator models a data exchange/staging object, and a code generation system generates code that supports staging and data exchange functionalities based on the data station operator. In one implementation, a data station operator is modeled using a data flow operator modeling framework as described in the '540 application. More generally, the concept of an operator is generic to many different ETL or transformation frameworks and, therefore, the concept of a data station operator can be extended to many other types of data processing systems. In one implementation, a data station operator has one input port and one output port, and includes one or more of the attributes shown in Table 1 below.
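The following is a minimal model sketch of such an operator. The one-input-port/one-output-port shape is as stated above; since Table 1 is not reproduced here, the attribute fields shown are illustrative assumptions only, not the attributes of Table 1.

```java
// Hypothetical model of a data station operator: exactly one input port and
// one output port. Attribute names below are assumptions for illustration.
public class DataStationOperator {
    static class Port {}
    enum StagingType { PERMANENT_TABLE, TEMPORARY_TABLE, FLAT_FILE, MEMORY_CACHE }

    final Port input = new Port();   // one input port
    final Port output = new Port();  // one output port

    StagingType stagingType;         // kind of staging repository (assumed attribute)
    String repositoryName;           // staging table/file name (assumed attribute)
    boolean userDefined;             // explicit vs. auto-inserted (assumed attribute)
}
```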
Advantages of a Data Station Operator
Advantages of a data station operator include the following. With respect to performance, depending on the underlying runtime engine in which an ETL process is executed, staging intermediate data can yield better performance by controlling where and how data flows through the system. For example, when the underlying ETL engine is a database server (e.g., DB2), the execution code of one data flow can be represented as one or several SQL statements. A single SQL statement can contain several levels of nested sub-queries to represent many transform operations. However, a single SQL statement could lead to runtime performance problems on certain DB servers. For example, two common problems can be caused by one long SQL statement: 1) the log size required to run the SQL can be large if the number of nested queries reaches a certain level; and 2) a single (nested) query is limited by the DB vendor's query processing capability. In some cases, a single SQL statement will not work at all. In such cases, it is desirable to break the single SQL statement into smaller pieces for better performance, as illustrated below.
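The sketch below illustrates the idea of breaking one nested statement in two around a staging table. All table and column names are invented; the first statement materializes the inner sub-query so the database never has to evaluate the full nested statement at once.

```java
// Hypothetical two-step split of a nested SQL statement around a staging
// table (STG_STEP1); each piece is small enough to process independently.
public class SplitStatementExample {
    static final String STEP_1 =
        "INSERT INTO STG_STEP1 (cust_id, total) "
        + "SELECT cust_id, SUM(amount) FROM sales GROUP BY cust_id";
    static final String STEP_2 =
        "INSERT INTO target_tbl (cust_id, total) "
        + "SELECT cust_id, total FROM STG_STEP1 WHERE total > 1000";
}
```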
With regard to data (format) exchange, when a data flow includes a mix of SQL operators and non-SQL operators, it is generally not possible to represent the data flow using one common language. Data flowing through must be "staged" in order to transit from one type of operator to another. For example, consider a data flow that extracts data from a JDBC (Java Database Connectivity) source, goes through a couple of transformations, and then ends with the data being loaded into a target table. The code representing the JDBC extraction is a Java program, whereas the transformations and loads can be represented by SQL statements. In such cases, the output row sets from the JDBC extractor are staged into a DB2 table prior to sending the row sets to the following transform node.
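A minimal sketch of this JDBC-to-SQL exchange follows. The connection URLs, table names, and column names are invented for illustration; the point is only that rows pulled through JDBC are batched into a staging table so the downstream transform can be expressed purely in SQL.

```java
import java.sql.*;

// Hypothetical staging step: extract rows from a JDBC source and stage them
// into a DB2 table that subsequent SQL transforms can read.
public class JdbcStagingStep {
    public static void main(String[] args) throws SQLException {
        try (Connection src = DriverManager.getConnection("jdbc:some:source");
             Connection db2 = DriverManager.getConnection("jdbc:db2://host/WH")) {
            try (Statement s = src.createStatement();
                 ResultSet rs = s.executeQuery("SELECT id, name FROM src_tbl");
                 PreparedStatement ins = db2.prepareStatement(
                     "INSERT INTO STG_JDBC_OUT (id, name) VALUES (?, ?)")) {
                while (rs.next()) {                  // stage each extracted row
                    ins.setInt(1, rs.getInt(1));
                    ins.setString(2, rs.getString(2));
                    ins.addBatch();
                }
                ins.executeBatch();  // rows now visible to the SQL transform node
            }
        }
    }
}
```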
A data station operator also permits tracing of data within a data flow. Providing a tracing functionality in a data flow permits users to monitor and track data flow runtime execution, and helps users diagnose problems when errors occur. Providing a data station operator permits a user to explicitly specify a staging point for an operator in a data flow, at which a stage table/file will be created to capture all intermediate data that has been processed up to the staging point. Additional diagnostic information for the staging point can also be captured, such as the number of rows processed, the code being executed, the temporary tables/files created, and so on. A data station operator also provides error recovery capability for a data flow. For example, when the execution of a data flow fails, the code generation system, or user, can choose to begin a recovery process from a staged point where intermediate processed data is still valid. This permits faster recovery from a failure relative to having to restart from the beginning of a data flow.
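The kind of diagnostic record a staging point might capture can be sketched as a small data holder; the field names here are illustrative assumptions, not the actual diagnostic schema.

```java
// Hypothetical diagnostic record for one staging point, per the description
// above: rows processed, the code executed, and the staging object created.
public class StagePointDiagnostics {
    long rowsProcessed;        // number of rows staged so far
    String executedCode;       // e.g., the generated SQL for this sub-flow
    String stagingObject;      // temporary table or file holding the rows

    void report() {
        System.out.printf("staged %d rows into %s%n", rowsProcessed, stagingObject);
    }
}
```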
Pre-Determined Criteria for Inserting a Data Station Operator into a Data Flow
A data exchange/staging point identifies a position where data exchange/staging is required in a data flow—e.g., either on a link or an output port. In one implementation, a staging point in a data flow is identified when one of the following conditions arises:
An explicit exchange/staging point specified by a user. A staging point can be explicitly specified by the user using a data station operator. For example, a user can specify a staging point where the user wants to examine intermediate data sets processed during runtime, which helps with debugging and diagnosis when errors occur. Optionally, the user can specify the data station repository type as well.
An implicit exchange/staging point identified by a code generation system. There are situations where implicit staging points are required. For example, in one implementation, staging points are required for an operator in a data flow that requires chunking—e.g., a splitter operator requires chunking if there are multiple output streams going into different targets. A custom operator can also specify whether input streams and/or output streams need to be chunked. In general, operators that typically require staging include splitter operators, operators that support the discarding of rows, and custom operators that require staging. Implicit staging points may also be required for those operations of a given operator that need to be broken into multiple parts to improve performance. The following operators are example candidates for which a staging point may be required. Inner join operator: an inner join operator can have multiple inputs and perform a SQL join on multiple tables. Performance of a SQL join operation depends on the underlying database query processing; it is, therefore, desirable to split one large join into multiple joins with smaller join cardinalities, as sketched below. In such a case, staging points are required at intermediate join stages, and the type of data station can be a global temporary table for optimal performance. Key lookup operator: a key lookup operator is implemented using a SQL inner join operation and, therefore, key lookup operators can be processed similarly to inner join operators.
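The following sketch shows one large join split into two smaller joins with the intermediate stage held in a DB2-style global temporary table, as suggested above. Table, column, and schema names are invented, and the exact DDL syntax varies by database vendor.

```java
// Hypothetical split of a three-table inner join into two smaller joins
// around a global temporary table (SESSION.STG_AB).
public class SplitJoinExample {
    static final String STAGE_DDL =
        "DECLARE GLOBAL TEMPORARY TABLE SESSION.STG_AB "
        + "(a_key INT, payload VARCHAR(64)) ON COMMIT PRESERVE ROWS NOT LOGGED";
    static final String JOIN_1 =   // first, smaller join is materialized
        "INSERT INTO SESSION.STG_AB (a_key, payload) "
        + "SELECT a.k, b.payload FROM a JOIN b ON a.k = b.k";
    static final String JOIN_2 =   // second join reads the staged rows
        "SELECT s.payload, c.extra FROM SESSION.STG_AB s JOIN c ON s.a_key = c.k";
}
```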
In one implementation, when a code generation system chunks a data flow into several small pieces, staging tables and staging files are created and maintained to hold intermediate row sets during an ETL process—e.g., data between extract and transform, between transform and load, or at a chunking point inside a data flow. In one implementation, staging tables are database relational tables; depending on how a staging table is used, it can be either a permanent table on the ETL transform database or a temporary table created in the data transformation session. In one implementation, staging files are flat files that hold intermediate transformed data in text format. Staging tables and staging files can be created on a transform engine. A user can also input other specifications of a staging object, such as the table spaces and indexes used for staging tables, or the location of staging files.
Staging Tables
In one implementation, staging tables are used to hold intermediate row sets during an ETL process. A code generation system can maintain a staging table, including its DDL (Data Definition Language) and associated table spaces and indexes. The "lifetime" of a staging table (i.e., how long the staging table persists and when it should be deleted) can be externally specified by a user or internally determined by a code generation system, depending on the usage of the staging object. For example, if a staging table is generated internally by a code generation system and is used only for a specific data flow stream, the staging table can be created at the beginning of the data flow execution as a database temporary table, which will be deleted when the session ends. If, however, an internal staging table is used to chunk a data flow into multiple parallel execution pieces, the staging object can be defined as a database permanent table to hold intermediate row sets until the end of the ETL application execution.
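Both lifetimes can be illustrated side by side; the DDL below uses DB2-style syntax with invented table names, and is a sketch rather than the system's actual generated DDL.

```java
// Hypothetical DDL for the two staging-table lifetimes described above.
public class StagingTableLifetime {
    // Session-scoped: dropped automatically when the database session ends.
    static final String TEMP_STAGE =
        "DECLARE GLOBAL TEMPORARY TABLE SESSION.STG_FLOW1 (id INT, v VARCHAR(32)) "
        + "ON COMMIT PRESERVE ROWS NOT LOGGED";
    // Permanent: survives across sessions; dropped explicitly at the end of
    // the ETL application execution.
    static final String PERM_STAGE =
        "CREATE TABLE ETL.STG_CHUNK1 (id INT, v VARCHAR(32))";
    static final String CLEANUP = "DROP TABLE ETL.STG_CHUNK1";
}
```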
Staging Files
In one implementation, staging files are flat text files. A flat file is a text-based ASCII file that is commonly used as a bridge between non-relational data sets and relational database tables. Staging flat files can be generated by a database export utility (such as DB2 SQL export) to export data from relational DB tables, or can be generated using a custom operator interface provided by a code generation system. Flat files can be loaded into target tables through a database load utility such as DB2 load.
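As a minimal sketch of the custom-operator route, the code below writes intermediate rows to a comma-delimited ASCII staging file; in practice a database export utility such as DB2 export could produce an equivalent file. The method name and delimiter choice are assumptions for illustration.

```java
import java.io.*;
import java.sql.*;

// Hypothetical custom operator that stages intermediate rows into a flat
// ASCII text file, one comma-delimited row per line.
public class FlatFileStage {
    static void stage(ResultSet rs, File out) throws SQLException, IOException {
        try (PrintWriter w = new PrintWriter(new FileWriter(out))) {
            int cols = rs.getMetaData().getColumnCount();
            while (rs.next()) {
                StringBuilder line = new StringBuilder();
                for (int c = 1; c <= cols; c++) {
                    if (c > 1) line.append(',');
                    line.append(rs.getString(c));   // text format
                }
                w.println(line);                    // one staged row per line
            }
        }
    }
}
```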
JDBC Result Sets
JDBC result sets are the exchange point between two or more operators. The results of a previous (upstream) operator are represented as JDBC result sets and consumed by following (downstream) operators. JDBC result sets are memory objects and, in one implementation, the handles/names of the memory objects are determined by the code generation system.
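A minimal sketch of this in-memory exchange follows: the upstream operator produces the ResultSet, and the downstream operator consumes it directly, with no physical staging. Table and method names are invented.

```java
import java.sql.*;

// Hypothetical JDBC result set used as an in-memory exchange point between
// an upstream producer and a downstream consumer.
public class ResultSetExchange {
    static ResultSet upstream(Connection c) throws SQLException {
        return c.createStatement().executeQuery("SELECT id, name FROM src_tbl");
    }
    static void downstream(ResultSet rows) throws SQLException {
        while (rows.next()) {
            process(rows.getInt(1), rows.getString(2)); // consume in-memory rows
        }
    }
    static void process(int id, String name) { /* downstream transform */ }
}
```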
Automatically Placed Data Station Operators
For internally generated staging points (e.g., those staging points not explicitly defined by a user), a code generation system can analyze the internal representation of a data flow (e.g., through a QGM), identify staging points, and insert data station operators that chunk the data flow into multiple smaller pieces (or sub-flows). Between these sub-flows, staging tables can be used to temporarily store intermediate transformed result sets. For example, when a chunking point is identified, a QGM can include staging tables/files (e.g., represented as table/file boxes) that link to other QGM nodes. In one implementation, the name of each staging table within a QGM is unique. In one implementation, the DDL statements for all staging tables generated within a data flow are returned.
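One simple way to satisfy both properties (unique names within a QGM, and all generated DDL returned) is sketched below; the naming scheme and DDL shape are assumptions for illustration, not the system's actual scheme.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical generator of unique staging-table names within one QGM,
// collecting the DDL for every staging table it creates.
public class StagingNamer {
    private final AtomicInteger seq = new AtomicInteger();
    private final List<String> ddl = new ArrayList<>();

    String newStagingTable() {
        String name = "STG_" + seq.incrementAndGet();      // unique within the QGM
        ddl.add("CREATE TABLE " + name + " (id INT, v VARCHAR(32))");
        return name;
    }
    List<String> allDdl() { return ddl; }  // returned along with the generated plan
}
```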
In one implementation, such staging points are identified, and the corresponding data station operators placed, by a code generation system (e.g., code generation system 210).
One or more of the method steps described above can be performed by one or more programmable processors executing a computer program to perform functions by operating on input data and generating output. Generally, the invention can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment containing both hardware and software elements. In one implementation, the invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
Furthermore, the invention can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disk—read only memory (CD-ROM), compact disk—read/write (CD-R/W) and DVD.
Memory elements 804A-B can include local memory employed during actual execution of the program code, bulk storage, and cache memories that provide temporary storage of at least some program code in order to reduce the number of times the code must be retrieved from bulk storage during execution. As shown, input/output or I/O devices 808A-B (including, but not limited to, keyboards, displays, pointing devices, etc.) are coupled to data processing system 800. I/O devices 808A-B may be coupled to data processing system 800 directly or indirectly through intervening I/O controllers (not shown).
In one implementation, a network adapter 810 is coupled to data processing system 800 to enable data processing system 800 to become coupled to other data processing systems or remote printers or storage devices through communication link 812. Communication link 812 can be a private or public network. Modems, cable modems, and Ethernet cards are just a few of the currently available types of network adapters.
Various implementations for modeling data exchange in a data flow associated with an extract, transform, and load (ETL) process have been described. Nevertheless, various modifications may be made to the implementations, and those variations would be within the scope of the present invention. For example, with respect to various implementations discussed above, different programming languages (e.g., C) can be used to stage intermediate processing data into a proprietary data format. Accordingly, many modifications may be made without departing from the scope of the following claims.
This application is a divisional of U.S. patent application Ser. No. 11/621,521, filed Jan. 9, 2007, now U.S. Pat. No. 8,219,518. The aforementioned related patent application is herein incorporated by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
4813013 | Dunn | Mar 1989 | A |
4901221 | Kodosky et al. | Feb 1990 | A |
5379423 | Mutoh et al. | Jan 1995 | A |
5497500 | Rogers et al. | Mar 1996 | A |
5577253 | Blickstein | Nov 1996 | A |
5586328 | Caron et al. | Dec 1996 | A |
5729746 | Leonard | Mar 1998 | A |
5758160 | McInerney et al. | May 1998 | A |
5850548 | Williams | Dec 1998 | A |
5857180 | Hallmark et al. | Jan 1999 | A |
5920721 | Hunter et al. | Jul 1999 | A |
5940593 | House et al. | Aug 1999 | A |
5966532 | McDonald et al. | Oct 1999 | A |
6014670 | Zamanian et al. | Jan 2000 | A |
6044217 | Brealey et al. | Mar 2000 | A |
6098153 | Fuld et al. | Aug 2000 | A |
6202043 | Devoino et al. | Mar 2001 | B1 |
6208345 | Sheard et al. | Mar 2001 | B1 |
6208990 | Suresh et al. | Mar 2001 | B1 |
6243710 | DeMichiel et al. | Jun 2001 | B1 |
6282699 | Zhang et al. | Aug 2001 | B1 |
6434739 | Branson et al. | Aug 2002 | B1 |
6449619 | Colliat et al. | Sep 2002 | B1 |
6480842 | Agassi et al. | Nov 2002 | B1 |
6604110 | Savage et al. | Aug 2003 | B1 |
6668253 | Thompson et al. | Dec 2003 | B1 |
6738964 | Zink et al. | May 2004 | B1 |
6772409 | Chawla et al. | Aug 2004 | B1 |
6795790 | Lang et al. | Sep 2004 | B1 |
6807651 | Saluja et al. | Oct 2004 | B2 |
6839724 | Manchanda et al. | Jan 2005 | B2 |
6839726 | Kawamoto | Jan 2005 | B2 |
6850925 | Chaudhuri et al. | Feb 2005 | B2 |
6928431 | Dettinger et al. | Aug 2005 | B2 |
6968326 | Johnson et al. | Nov 2005 | B2 |
6968335 | Bayliss et al. | Nov 2005 | B2 |
6978270 | Carty et al. | Dec 2005 | B1 |
7003560 | Mullen et al. | Feb 2006 | B1 |
7010779 | Rubin et al. | Mar 2006 | B2 |
7031987 | Mukkamalla et al. | Apr 2006 | B2 |
7035786 | Abu El Ata et al. | Apr 2006 | B1 |
7076765 | Omori | Jul 2006 | B1 |
7103590 | Murthy et al. | Sep 2006 | B1 |
7191183 | Goldstein | Mar 2007 | B1 |
7209925 | Srinivasan et al. | Apr 2007 | B2 |
7340718 | Szladovics et al. | Mar 2008 | B2 |
7343585 | Lau et al. | Mar 2008 | B1 |
7499917 | Purcell et al. | Mar 2009 | B2 |
7526468 | Vincent et al. | Apr 2009 | B2 |
7689576 | Rao et al. | Mar 2010 | B2 |
7689582 | Behnen et al. | Mar 2010 | B2 |
7739267 | Jin et al. | Jun 2010 | B2 |
7747563 | Gehring | Jun 2010 | B2 |
7860863 | Bar-Or et al. | Dec 2010 | B2 |
7941460 | Bar-Or et al. | May 2011 | B2 |
8230384 | Krishnan et al. | Jul 2012 | B1 |
20020046301 | Shannon et al. | Apr 2002 | A1 |
20020078262 | Harrison et al. | Jun 2002 | A1 |
20020116376 | Iwata et al. | Aug 2002 | A1 |
20020198872 | MacNicol et al. | Dec 2002 | A1 |
20030033437 | Fischer et al. | Feb 2003 | A1 |
20030037322 | Kodosky et al. | Feb 2003 | A1 |
20030051226 | Zimmer et al. | Mar 2003 | A1 |
20030100198 | Hicks et al. | May 2003 | A1 |
20030110470 | Hanson et al. | Jun 2003 | A1 |
20030149556 | Riess | Aug 2003 | A1 |
20030154274 | Nakamura | Aug 2003 | A1 |
20030172059 | Andrei | Sep 2003 | A1 |
20030182651 | Secrist et al. | Sep 2003 | A1 |
20030229639 | Carlson et al. | Dec 2003 | A1 |
20030233374 | Spinola et al. | Dec 2003 | A1 |
20030236788 | Kanellos et al. | Dec 2003 | A1 |
20040054684 | Geels | Mar 2004 | A1 |
20040068479 | Wolfson et al. | Apr 2004 | A1 |
20040107414 | Bronicki et al. | Jun 2004 | A1 |
20040220923 | Nica | Nov 2004 | A1 |
20040254948 | Yao | Dec 2004 | A1 |
20050022157 | Brendle et al. | Jan 2005 | A1 |
20050044527 | Recinto | Feb 2005 | A1 |
20050055257 | Senturk et al. | Mar 2005 | A1 |
20050066283 | Kanamaru | Mar 2005 | A1 |
20050091664 | Cook et al. | Apr 2005 | A1 |
20050091684 | Kawabata et al. | Apr 2005 | A1 |
20050097103 | Zane et al. | May 2005 | A1 |
20050108209 | Beyer et al. | May 2005 | A1 |
20050131881 | Ghosh et al. | Jun 2005 | A1 |
20050137852 | Chari et al. | Jun 2005 | A1 |
20050149914 | Krapf et al. | Jul 2005 | A1 |
20050174986 | Emond et al. | Aug 2005 | A1 |
20050174988 | Bieber et al. | Aug 2005 | A1 |
20050188353 | Hasson et al. | Aug 2005 | A1 |
20050216497 | Kruse et al. | Sep 2005 | A1 |
20050227216 | Gupta | Oct 2005 | A1 |
20050234969 | Mamou et al. | Oct 2005 | A1 |
20050240354 | Mamou et al. | Oct 2005 | A1 |
20050240652 | Crick | Oct 2005 | A1 |
20050243604 | Harken et al. | Nov 2005 | A1 |
20050256892 | Harken | Nov 2005 | A1 |
20050283473 | Rousso et al. | Dec 2005 | A1 |
20060004863 | Chan et al. | Jan 2006 | A1 |
20060015380 | Flinn et al. | Jan 2006 | A1 |
20060036522 | Perham | Feb 2006 | A1 |
20060047709 | Belin et al. | Mar 2006 | A1 |
20060066257 | Chou | Mar 2006 | A1 |
20060074621 | Rachman | Apr 2006 | A1 |
20060074730 | Shukla et al. | Apr 2006 | A1 |
20060101011 | Lindsay et al. | May 2006 | A1 |
20060112109 | Chowdhary et al. | May 2006 | A1 |
20060123067 | Ghattu et al. | Jun 2006 | A1 |
20060167865 | Andrei | Jul 2006 | A1 |
20060174225 | Bennett et al. | Aug 2006 | A1 |
20060206869 | Lewis et al. | Sep 2006 | A1 |
20060212475 | Cheng | Sep 2006 | A1 |
20060218123 | Chowdhuri et al. | Sep 2006 | A1 |
20060228654 | Sanjar et al. | Oct 2006 | A1 |
20070061305 | Azizi | Mar 2007 | A1 |
20070078812 | Waingold et al. | Apr 2007 | A1 |
20070157191 | Seeger et al. | Jul 2007 | A1 |
20070169040 | Chen | Jul 2007 | A1 |
20070203893 | Krinsky et al. | Aug 2007 | A1 |
20070214111 | Jin et al. | Sep 2007 | A1 |
20070214171 | Behnen et al. | Sep 2007 | A1 |
20070214176 | Rao et al. | Sep 2007 | A1 |
20070244876 | Jin et al. | Oct 2007 | A1 |
20080092112 | Jin et al. | Apr 2008 | A1 |
20080147703 | Behnen et al. | Jun 2008 | A1 |
20080147707 | Jin et al. | Jun 2008 | A1 |
20080168082 | Jin et al. | Jul 2008 | A1 |
Entry |
---|
Simitsis, Alkis, Mapping Conceptual to Logical Models for ETL Processes, Proceedings of the 8th ACM international workshop on Data warehousing and OLAP, 2005, pp. 67-76, ACM, New York, New York, United States. |
Ives, Zachary E. et al, An Adaptive Query Execution System for Data Integration, Jun. 1999, pp. 299-310, vol. 28, Issue 2, ACM, New York, New York, United States. |
Carreira, Paulo et al., Data Mapper: An Operator for Expressing One-to-Many Data Transformations, Data Warehousing and Knowledge Discovery, 2005, pp. 136-145, NY, NY, United States. |
Konstantinides, Konstantinos et al., The Khoros Software Development Environment for Image and Signal Processing (Abstract and Introduction), 1992, Hewlett-Packard Laboratories, Palo Alto, CA, United States. |
Arkusinski, Andy et al., A Software Port From a Standalone Communications Management Unit to an Integrated Platform, Digital Avionics Systems Conference, Oct. 2002, pp. 6B3-1-6B3-9, vol. 1, IEEE Computer Society, Washington, D.C., United States. |
Carreira, Paulo et al., Execution of Data Mappers, Proceedings of the 2004 international workshop on Information quality in information systems, 2004, pp. 2-9, ACM, New York, New York, United States. |
Ferguson, Warren D. et al., Platform Independent Translations for a Compilable Ada Abstract Syntax, Proceedings of the conference on TRI-Ada '93, 1993, pp. 312-322, ACM, New York, New York, United States. |
Friedrich II, John R., Meta-Data Version and Configuration Management in Multi-Vendor Environments, Proceedings of the 2005 ACM SIGMOD international conference on Management of data, 2005, pp. 799-804, ACM, New York, New York, United States. |
Gurd, J. R. et al., The Manchester Prototype Dataflow Computer, Communications of the ACM, Jan. 1985, pp. 34-52, vol. 28, Issue 1, ACM, New York, New York, United States. |
Haas, Laura M. et al., Clio Grows Up: From Research Prototype to Industrial Tool, Proceedings of the 2005 ACM SIGMOD international conference on Management of data, 2005, pp. 805-810, ACM, New York, New York, United States. |
Hernandez, Mauricio et al., Clio: A Schema Mapping Tool for Information Integration, 8th International Symposium on Parallel Architectures, Algorithms and Networks, Dec. 2005, p. 1, IEEE Computer Society, Washington, D.C., United States. |
Zhao, Wei et al., Automated Glue/Wrapper Code Generation in Integration of Distributed and Heterogeneous Software Components, Proceedings of the Enterprise Distributed Object Computing Conference, 2004, pp. 275-285, IEEE Computer Society, Washington, D.C., United States. |
Yu, Tsae-Feng, Transform Merging of ETL Data Flow Plan, Proceedings of the International Conference on Information and Knowledge Engineering, Jun. 2003, pp. 193-198, CSREA Press, Las Vegas, NV, United States. |
Werner, Sebastian et al., Just-in-sequence material supply—a simulation based solution in electronics, Robotics and Computer-Integrated Manufacturing: An international journal of manufacturing and product and process development, 2003, pp. 107-111, vol. 19, No.'s 1-2, Pergamon Press, Amsterdam, The Netherlands. |
Vassiliadis, Panos et al., A generic and Customizable framework for the design of ETL scenarios, Information Systems—Special issue: The 15th international conference on advanced information systems engineering, Nov. 2005, pp. 492-525, vol. 30, Issue 7, Elsevier Science Ltd., Oxford, UK, UK. |
Stewart, Don et al., Dynamic Applications from the Ground Up, Proceedings of the 2005 ACM SIGPLAN workshop on Haskell, 2005, pp. 27-38, ACM, New York, New York, United States. |
Rifaieh, Rami et al., Query-based Data Warehousing Tool, Proceedings of the 5th ACM international workshop on Data Warehousing and OLAP, 2002, pp. 35-42, ACM, New York, New York, United States. |
Ramu, RN, Method for Initializing a Plateform and Code Independent Library, IBM Technical Disclosure Bulletin, Sep. 1994, pp. 637-638, vol. 37, No. 9, International Business Machines Corporation, Armonk, NY, United States. |
Poess, Meikel et al., TPC-DS, Taking Decision Support Benchmarking to the Next Level, Proceedings of the 2002 ACM Sigmod international conference on Management of data, 2002, pp. 582-587, ACM, New York, New York, United States. |
Jardim-Goncalves, Ricardo et al, Integration and adoptability of APs: the role of ISO TC184/SC4 standards, International Journal of Computer Applications in Technology, 2003, pp. 105-116, vol. 18, Issues 1-4, Inderscience Publishers, Geneva, Switzerland. |
Number | Date | Country | |
---|---|---|---|
20120271865 A1 | Oct 2012 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 11621521 | Jan 2007 | US |
Child | 13523217 | US |