1. Field of Invention
The present invention relates generally to multi-dimensional relational databases and, more specifically, to mechanisms for aggregating data elements in a multi-dimensional relational database system and for processing queries on such aggregated data elements, and also to informational database systems that utilize multi-dimensional relational databases and such aggregation/query mechanisms.
2. Brief Description of the State of the Art
Information technology (IT) enables an enterprise to manage and optimize its internal business practices through the analysis and sharing of data internally within the enterprise. In addition, IT enables an enterprise to manage and optimize its external business practices through the sharing of data with external parties such as suppliers, customers and investors, and through on-line transactions between the enterprise and external parties. Informational database systems (systems that store data, support query processing on the stored data, and possibly support analysis of the stored data) play a central role in many different parts of today's IT systems.
The Relational OLAP (ROLAP) system accesses data stored in a Data Warehouse to provide OLAP analyses. The premise of ROLAP is that OLAP capabilities are best provided directly against the relational database, i.e. the Data Warehouse. The ROLAP architecture was invented to enable direct access of data from Data Warehouses, and therefore support optimization techniques to meet batch window requirements and provide fast response times. Typically, these optimization techniques include application-level table partitioning, pre-aggregate inferencing, denormalization support, and the joining of multiple fact tables.
A typical ROLAP system has a three-tier or layer client/server architecture. The “database layer” utilizes relational databases for data storage, access, and retrieval processes. The “application logic layer” is the ROLAP engine which executes the multidimensional reports from multiple users. The ROLAP engine integrates with a variety of “presentation layers,” through which users perform OLAP analyses. After the data model for the data warehouse is defined, data from on-line transaction-processing (OLTP) systems is loaded into the relational database management system (RDBMS). If required by the data model, database routines are run to pre-aggregate the data within the RDBMS. Indices are then created to optimize query access times. End users submit multidimensional analyses to the ROLAP engine, which then dynamically transforms the requests into SQL execution plans. The SQL execution plans are submitted to the relational database for processing, the relational query results are cross-tabulated, and a multidimensional result data set is returned to the end user. ROLAP is a fully dynamic architecture capable of utilizing pre-calculated results when they are available, or dynamically generating results from the raw information when necessary.
The Multidimensional OLAP (MOLAP) systems utilize a MDD or “cube” to provide OLAP analyses. The main premise of this architecture is that data must be stored multidimensionally to be accessed and viewed multidimensionally. Such non-relational MDD data structures typically can be queried by users to enable the users to “slice and dice” the aggregated data. As shown in
A more detailed description of the data warehouse and OLAP environment may be found in copending U.S. patent application Ser. No. 09/514,611 to R. Bakalash, G. Shaked, and J. Caspi, commonly assigned to HyperRoll Israel, Limited, incorporated by reference above in its entirety.
In an RDBMS, users view data stored in tables. By contrast, users of a non-relational database system can view other data structures, either instead of or in addition to the tables of the RDBMS system.
The choice of using an RDBMS as the data repository in information database systems naturally stems from the realities of SQL standardization, the wealth of RDBMS-related tools, and readily available expertise in RDBMS systems. However, the querying component of RDBMS technology suffers from performance and optimization problems stemming from the very nature of the relational data model. More specifically, during query processing, the relational data model requires a mechanism that locates the raw data elements that match the query. Moreover, to support queries that involve aggregation operations, such aggregation operations must be performed over the raw data elements that match the query. For large multi-dimensional databases, a naive implementation of these operations involves computationally intensive table scans that lead to unacceptable query response times.
In order to better understand how the prior art has approached this problem, it will be helpful to briefly describe the relational database model. According to the relational database model, a relational database is represented by a logical schema and tables that implement the schema. The logical schema is represented by a set of templates that define one or more dimensions (entities) and attributes associated with a given dimension. The attributes associated with a given dimension include one or more attributes that distinguish it from every other dimension in the database (a dimension identifier). Relationships amongst dimensions are formed by joining attributes. The data structure that represents the set of templates and relations of the logical schema is typically referred to as a catalog or dictionary. Note that the logical schema represents the relational organization of the database, but does not hold any fact data per se. This fact data is stored in tables that implement the logical schema.
Star schemas are frequently used to represent the logical structure of a relational database. The basic premise of star schemas is that information can be classified into two groups: facts and dimensions. Facts are the core data elements being analyzed; for example, units of an individual item sold are facts. Dimensions are attributes about the facts; for example, the product types purchased and the date of purchase. Business questions against this schema are asked by looking up specific facts (UNITS) through a set of dimensions (MARKETS, PRODUCTS, PERIOD). The central fact table is typically much larger than any of its dimension tables.
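By way of illustration, the fact/dimension lookup described above can be sketched as a miniature in-memory star schema. The table names (PRODUCTS, PERIOD, UNITS) follow the example in the text, but the sample rows and the helper function are purely hypothetical, not part of the invention:

```python
# Minimal in-memory sketch of a star schema: dimension tables keyed by
# surrogate key, plus a central fact table that references them.
products = {1: {"type": "beverage"}, 2: {"type": "snack"}}
period = {100: {"month": "1999-01"}, 101: {"month": "1999-02"}}

# Central fact table: each row holds dimension keys and a fact (UNITS).
fact_table = [
    {"product": 1, "period": 100, "units": 5},
    {"product": 1, "period": 100, "units": 3},
    {"product": 2, "period": 101, "units": 7},
]

def units_sold(product_type, month):
    """Look up facts (UNITS) through the PRODUCTS and PERIOD dimensions."""
    return sum(
        row["units"]
        for row in fact_table
        if products[row["product"]]["type"] == product_type
        and period[row["period"]]["month"] == month
    )

print(units_sold("beverage", "1999-01"))  # 8
```

The asymmetry the text describes is visible even here: the fact table grows with every transaction, while the dimension tables stay small.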
An exemplary star schema is illustrated in
When processing a query, the tables that implement the schema are accessed to retrieve the facts that match the query. For example, in a star schema implementation as described above, the facts are retrieved from the central fact table and/or the dimension tables. Locating the facts that match a given query involves one or more join operations. Moreover, to support queries that involve aggregation operations, such aggregation operations must be performed over the facts that match the query. For large multi-dimensional databases, a naive implementation of these operations involves computationally intensive table scans that typically lead to unacceptable query response times. Moreover, since the fact tables are pre-summarized and aggregated along business dimensions, these tables tend to be very large. This point becomes an important consideration of the performance issues associated with star schemas. A more detailed discussion of the performance issues (and proposed approaches that address such issues) related to joining and aggregation of star schemas is now set forth.
The first performance issue arises from computationally intensive table scans that are performed by a naive implementation of data joining. Indexing schemes may be used to bypass these scans when performing joining operations. Such schemes include B-tree indexing, inverted list indexing and aggregate indexing. A more detailed description of such indexing schemes can be found in “The Art of Indexing”, Dynamic Information Systems Corporation, October 1999, available at http://www.disc.com/artindex.pdf. All of these indexing schemes replace table scan operations (involved in locating the data elements that match a query) with one or more index lookup operations. Inverted list indexing associates an index with a group of data elements, and stores (at a location identified by the index) a group of pointers to the associated data elements. During query processing, in the event that the query matches the index, the pointers stored in the index are used to retrieve the corresponding data elements. Aggregation indexing integrates an aggregation index with an inverted list index to provide pointers to raw data elements that require aggregation, thereby providing for dynamic summarization of the raw data elements that match the user-submitted query.
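The inverted list scheme described above can be sketched in a few lines. This is a generic illustration of the technique, not the proprietary implementation referenced in the cited paper; the table and values are hypothetical:

```python
# Inverted list index: each distinct value stores a group of pointers
# (row positions) to its matching data elements, so a lookup replaces
# a full table scan.
rows = ["red", "blue", "red", "green", "blue", "red"]

# Build the inverted list: value -> list of pointers into the table.
inverted = {}
for pointer, value in enumerate(rows):
    inverted.setdefault(value, []).append(pointer)

# Query: fetch matching rows via the stored pointers, no scan required.
matches = [rows[p] for p in inverted.get("red", [])]
print(inverted["red"])  # [0, 2, 5]
```

An aggregation index, as the text notes, layers an aggregation operator over exactly these pointer lists, summarizing the pointed-to raw elements on demand.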
These indexing schemes are intended to improve join operations by replacing table scan operations with one or more index lookup operations in order to locate the data elements that match a query. However, these indexing schemes suffer from various performance issues as follows:
Another performance issue arises from dimension tables that contain multiple hierarchies. In such cases, the dimensional table often includes a level-of-hierarchy indicator for every record. Every retrieval from a fact table that stores details and aggregates must use the indicator to obtain the correct result, which impacts performance. The best alternative to using the level indicator is the snowflake schema. In this schema, aggregate tables are created separately from the detail tables. In addition to the main fact tables, the snowflake schema contains separate fact tables for each level of aggregation. Notably, the snowflake schema is even more complicated than a star schema, and often requires multiple SQL statements to get the results that are required.
Another performance issue arises from the pairwise join problem. Traditional RDBMS engines are not designed for the rich set of complex queries that are issued against a star schema. The need to retrieve related information from several tables in a single query—“join processing”—is severely limited. Many RDBMSs can join only two tables at a time. If a complex join involves more than two tables, the RDBMS needs to break the query into a series of pairwise joins. Selecting the order of these joins has a dramatic performance impact. There are optimizers that spend a lot of CPU cycles to find the best order in which to execute those joins. Unfortunately, because the number of combinations to be evaluated grows exponentially with the number of tables being joined, the problem of selecting the best order of pairwise joins rarely can be solved in a reasonable amount of time.
Moreover, because the number of combinations is often too large, optimizers limit the selection on the basis of a criterion of directly related tables. In a star schema, the fact table is the only table directly related to most other tables, meaning that the fact table is a natural candidate for the first pairwise join. Unfortunately, the fact table is the very largest table in the query, so this strategy leads to selecting a pairwise join order that generates a very large intermediate result set, severely affecting query performance.
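The combinatorial blow-up described above is easy to make concrete. As a rough sketch (the table names are hypothetical, and real optimizers also consider bushy join trees and pruning, which this ignores), the number of left-deep pairwise join orders for n tables is n!:

```python
# Why exhaustive join-order search is infeasible: the number of orderings
# of n tables is n!, which grows faster than exponentially.
import math
from itertools import permutations

def join_orders(tables):
    """Enumerate every order in which pairwise joins could be applied."""
    return list(permutations(tables))

orders = join_orders(["fact", "products", "markets", "period"])
print(len(orders))         # 24 candidate orders for just 4 tables
print(math.factorial(10))  # 3628800 candidate orders for 10 tables
```

A query touching a dozen tables therefore presents hundreds of millions of candidate orders, which is why optimizers fall back on heuristics such as the directly-related-tables criterion described above.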
There is an optimization strategy, typically referred to as Cartesian Joins, that lessens the performance impact of the pairwise join problem by allowing the joining of unrelated tables. The join to the fact table, which is the largest one, is deferred until the very end, thus reducing the size of intermediate result sets. In a join of two unrelated tables, every combination of the two tables' rows is produced, a Cartesian product. Because the dimension tables are comparatively small, deferring the fact-table join in this way can improve query performance. However, this strategy is viable only if the Cartesian product of the dimension rows selected is much smaller than the number of rows in the fact table. The multiplicative nature of the Cartesian join makes the optimization helpful only for relatively small databases.
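The viability condition stated above can be sketched numerically. The row counts and table names below are hypothetical, chosen only to show the multiplicative nature of the strategy:

```python
# Cartesian Join strategy sketch: form the cross product of the selected
# dimension rows first, and join the (large) fact table only at the end.
# Viable only while the product stays much smaller than the fact table.
from itertools import product

selected_products = ["p1", "p2"]
selected_markets = ["east", "west", "north"]
fact_table_rows = 1_000_000  # hypothetical fact-table size

cartesian = list(product(selected_products, selected_markets))
print(len(cartesian))  # 6 combinations, far fewer than the fact rows

viable = len(cartesian) < fact_table_rows
print(viable)  # True -> defer the fact-table join to the very end
```

Doubling the selectivity of each dimension multiplies the product, which is exactly why the text limits this optimization to relatively small databases.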
In addition, systems that exploit hardware and software parallelism have been developed that lessen the performance issues set forth above. Parallelism can help reduce the execution time of a single query (speed-up), or handle additional work without degrading execution time (scale-up). For example, Red Brick™ has developed STARjoin™ technology that provides high-speed, parallelizable multi-table joins in a single pass, thus allowing more than two tables to be joined in a single operation. The core technology is an innovative approach to indexing that accelerates multiple joins. Unfortunately, parallelism can only reduce, not eliminate, the performance degradation issues related to the star schema.
One of the most fundamental principles of the multidimensional database is the idea of aggregation. The most common aggregation is called a roll-up aggregation. This type is relatively easy to compute: e.g. taking daily sales totals and rolling them up into a monthly sales table. More difficult are analytical calculations, such as the aggregation of Boolean and comparative operators; however, these too are considered a subset of aggregation.
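The daily-to-monthly roll-up mentioned above can be sketched directly; the dates and sales figures are hypothetical:

```python
# Roll-up aggregation sketch: daily sales totals rolled up into monthly
# totals by truncating each date to its month level.
daily_sales = [
    ("1999-01-02", 120),
    ("1999-01-15", 80),
    ("1999-02-03", 200),
]

monthly_sales = {}
for day, total in daily_sales:
    month = day[:7]  # "YYYY-MM" is the roll-up level
    monthly_sales[month] = monthly_sales.get(month, 0) + total

print(monthly_sales)  # {'1999-01': 200, '1999-02': 200}
```

The same fold generalizes to any hierarchy level (day to month, month to quarter, store to region), which is what makes roll-up the workhorse aggregation of multidimensional databases.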
In a star schema, the results of aggregation are summary tables. Typically, summary tables are generated by database administrators who attempt to anticipate the data aggregations that the users will request, and then pre-build such tables. In such systems, when processing a user-generated query that involves aggregation operations, the pre-built aggregated data that matches the query is retrieved from the summary tables (if such data exists).
Summary tables containing pre-aggregated results typically provide for improved query response time with respect to on-the-fly aggregation. However, summary tables suffer from some disadvantages:
Note that in the event that the aggregated data does not exist in the summary tables, table join operations and aggregation operations are performed over the raw facts in order to generate such aggregated data. This is typically referred to as on-the-fly aggregation. In such instances, aggregation indexing is used to mitigate the performance impact of the multiple data joins associated with dynamic aggregation of the raw data. Nevertheless, in large multi-dimensional databases, such dynamic aggregation may lead to unacceptable query response times.
In view of the problems associated with joining and aggregation within RDBMS, prior art ROLAP systems have suffered from essentially the same shortcomings and drawbacks of their underlying RDBMS.
While prior art MOLAP systems provide for improved access time to aggregated data within their underlying MDD structures, and have performance advantages when carrying out joining and aggregations operations, prior art MOLAP architectures have suffered from a number of shortcomings and drawbacks which Applicants have detailed in their copending U.S. application Ser. Nos. 09/368,241 and 09/514,611 incorporated herein by reference.
In summary, such shortcomings and drawbacks stem from the fact that there is unidirectional data flow from the RDBMS to the MOLAP system. More specifically, atomic (raw) data is moved, in a single transfer, to the MOLAP system for aggregation, analysis and querying. Importantly, the aggregation results are external to the RDBMS. Thus, users of the RDBMS cannot directly view these results. Such results are accessible only from the MOLAP system. Because the MDD query processing logic in prior art MOLAP systems is separate from that of the RDBMS, users must procure rights to access the MOLAP system and be instructed (and be careful to conform to such instructions) to access the MDD (or the RDBMS) under certain conditions. Such requirements can present security issues, highly undesirable for system administration. Satisfying such requirements is a costly and logistically cumbersome process. As a result, the widespread applicability of MOLAP systems has been limited.
Thus, there is a great need in the art for an improved mechanism for joining and aggregating data elements within a relational database management system, and for integrating the improved relational database management system into informational database systems (including the data warehouse and OLAP domains), while avoiding the shortcomings and drawbacks of prior art systems and methodologies.
Accordingly, it is an object of the present invention to provide an improved method of and system for joining and aggregating data elements integrated within a relational database management system (RDBMS) using a non-relational multi-dimensional data structure (MDD), achieving a significant increase in system performance (e.g. decreased access/search time), user flexibility and ease of use.
Another object of the present invention is to provide such an RDBMS wherein its integrated data aggregation module supports high-performance aggregation (i.e. data roll-up) processes to maximize query performance of large data volumes.
Another object of the present invention is to provide such an RDBMS system, wherein its integrated data aggregation (i.e. roll-up) module speeds up the aggregation process by orders of magnitude, enabling larger database analysis by lowering the aggregation times.
Another object of the present invention is to provide such a novel RDBMS system for use in OLAP operations.
Another object of the present invention is to provide a novel RDBMS system having an integrated aggregation module that carries out novel roll-up (i.e. bottom-up) and spread-down (i.e. top-down) aggregation algorithms.
Another object of the present invention is to provide a novel RDBMS system having an integrated aggregation module that carries out full pre-aggregation and/or “on-the-fly” aggregation processes.
Another object of the present invention is to provide a novel RDBMS system having an integrated aggregation module which is capable of supporting a MDD having a multi-hierarchy dimensionality.
These and other objects of the present invention will become apparent hereinafter and in the Claims to Invention set forth herein.
In order to more fully appreciate the objects of the present invention, the following Detailed Description of the Illustrative Embodiments should be read in conjunction with the accompanying Drawings, wherein:
FIGS. 6C1 and 6C2, taken together, set forth a flow chart representation of the primary operations carried out within the RDBMS of the present invention when performing data aggregation and related support operations, including the servicing of user-submitted (e.g. natural language) queries made on such aggregated database of the present invention.
FIG. 9C1 is a schematic representation of the Query Directed Roll-up (QDR) aggregation method/procedure of the present invention, showing data aggregation starting from existing basic data or previously aggregated data in the first dimension (D1), and such aggregated data being utilized as a basis for QDR aggregation along the second dimension (D2).
FIG. 9C2 is a schematic representation of the Query Directed Roll-up (QDR) aggregation method/procedure of the present invention, showing initial data aggregation starting from existing previously aggregated data in the second dimension (D2), and thereafter continuing aggregation along the third dimension (D3).
Referring now to
Throughout this document, the terms “aggregation” and “pre-aggregation” shall be understood to mean the process of summation of numbers, as well as other mathematical operations, such as multiplication, subtraction, division, etc. It shall be understood that pre-aggregation operations occur asynchronously with respect to the traditional query processing operations. Moreover, the term “atomic data” shall be understood to refer to the lowest level of data granularity required for effective decision making. In the case of a retail merchandising manager, atomic data may refer to information by store, by day, and by item. For a banker, atomic data may be information by account, by transaction, and by branch.
In general, the improved RDBMS system of the present invention excels in performing two distinct functions, namely: the aggregation of data; and the handling of the resulting data for “on demand” client use. Moreover, because of improved data aggregation capabilities, the RDBMS of the present invention can be employed in a wide range of applications, including Data Warehouses supporting OLAP systems and the like. For purposes of illustration, initial focus will be accorded to the RDBMS of the present invention.
During operation, the base data originates from the fact table(s) of the RDBMS. The core data aggregation operations are performed by the Aggregation Engine, a Multidimensional Data Handler, and a Multidimensional Data Storage. The results of data aggregation are efficiently stored in a multidimensional data storage (MDDB) by the Data Handler. The SQL handler of the MDD Aggregation module services user-submitted queries (in the preferred embodiment, SQL query statements) forwarded from the query handler of the RDBMS. The SQL handler of the MDD Aggregation module may communicate with the query handler of the RDBMS over a standard interface (such as OLDB, OLE-DB, ODBC, SQL, API, JDBC, etc.). In this case, the support mechanisms of the RDBMS and SQL handler include components that provide communication of such data over these standard interfaces. Such interface components are well known in the art. Aggregation results (or drill-down results) are retrieved on demand and returned to the user.
Typically, a user interacts with a client machine (for example, using a web-enabled browser) to generate a natural language query that is communicated to the query interface of the RDBMS, for example over a network as shown. The query interface decomposes the query, via parsing, into a series of requests (in the preferred embodiment, SQL statements) that are communicated to the query handler of the RDBMS. It should be noted that the functions of the query interface may be implemented in a module that is not part of the RDBMS (for example, in the client machine). The query handler of the RDBMS forwards requests that involve data stored in the MDD of the MDD Aggregation module to the SQL handler of the MDD Aggregation module for servicing. Each request specifies a set of n-dimensions. The SQL handler of the MDD Aggregation Module extracts this set of dimensions and operates cooperatively with the MDD handler to address the MDDB using the set of dimensions, retrieve the addressed data from the MDDB, and return the results to the user via the query handler of the RDBMS.
FIGS. 6C1 and 6C2 set forth a flow chart illustrating the operations of an illustrative RDBMS of the present invention. In step 601, the base data loader of the MDD Aggregation Module loads the dictionary (or catalog) from the meta-data store in the RDBMS. In performing this function, the base data loader may utilize an adapter (interface) that maps the data types of the dictionary of the RDBMS (or that maps a standard data type used to represent the dictionary of the RDBMS) into the data types used in the MDD aggregation module. In addition, the base data loader extracts the dimensions from the dictionary and forwards the dimensions to the aggregation engine of the MDD Aggregation Module.
In step 603, the base data loader loads the fact table(s) from the RDBMS. In performing this function, the base data loader may utilize an adapter (interface) that maps the data types of the fact table(s) of the RDBMS (or that maps a standard data type used to represent the fact table(s) of the RDBMS) into the data types used in the MDD Aggregation Module. In addition, the base data loader extracts the atomic data from the fact table, and forwards the atomic data to the aggregation engine.
In step 605, the aggregation engine rolls-up (aggregates) the atomic data (provided by the base data loader in step 603) along at least one of the dimensions and operates cooperatively with the MDD handler to store the resultant aggregated data in the MDD database. A more detailed description of exemplary aggregation operations according to a preferred embodiment of the present invention is set forth below with respect to the QDR process of
In step 607, a reference is defined that provides users with the ability to query the data generated by the MDD Aggregation Module and/or stored in the MDDB of the MDD Aggregation Module. This reference is preferably defined using the Create View SQL statement, which allows the user to: i) define a table name (TN) associated with the MDD database stored in the MDD Aggregation Module, and ii) define a link used to route SQL statements on the table TN to the MDD Aggregation Module. In this embodiment, the view mechanism of the RDBMS enables reference and linking to the data stored in the MDDB of the MDD Aggregation Engine as illustrated in
In step 609, a user interacts with a client machine to generate a query, and the query is communicated to the query interface. The query interface generates one or more SQL statements on the reference defined in step 607 (this reference refers to the data stored in the MDDB of the MDD Aggregation Module), and forwards the SQL statement(s) to the query handler of the RDBMS.
In step 611, the query handler receives the SQL statement(s); and optionally transforms such SQL statement(s) to optimize the SQL statement (s) for more efficient query handling. Such transformations are well known in the art. For example, see Kimball, “Aggregation Navigation With (Almost) No MetaData”, DBMS Data Warehouse Supplement, August 1996, available at http://www.dbmsmag.com/9608d54.html.
In step 613, the query handler determines whether the received SQL statement(s) [or transformed SQL statement(s)] is on the reference generated in step 607. If so, operation continues to step 615; otherwise, normal query handling operations continue in step 625.
In step 615, the received SQL statement(s) [or transformed SQL statement(s)] is routed to the MDD aggregation engine for processing in step 617 using the link for the reference as described above with respect to step 607.
In step 617, the SQL statement(s) is received by the SQL handler of the MDD Aggregation Module, wherein a set of one or more N-dimensional coordinates are extracted from the SQL statement. In performing this function, SQL handler may utilize an adapter (interface) that maps the data types of the SQL statement issued by query handler of the RDBMS (or that maps a standard data type used to represent the SQL statement issued by query handler of the RDBMS) into the data types used in the MDD aggregation module.
In step 619, the set of N-dimensional coordinates extracted in step 617 are used by the MDD handler to address the MDDB and retrieve the corresponding data from the MDDB.
Finally, in step 621, the retrieved data is returned to the user via the RDBMS (for example, by forwarding the retrieved data to the SQL handler, which returns the retrieved data to the query handler of the RDBMS system, which returns the results of the user-submitted query to the user via the client machine), and the operation ends.
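The routing logic of steps 607 through 625 can be sketched as follows. This is a simplified, hypothetical illustration of the view-based linking described above (the view name, function names, and return strings are invented for the sketch, not taken from the invention):

```python
# Sketch of view-based routing: step 607 registers a reference (view)
# linked to the MDDB; step 613 routes any statement on that reference to
# the MDD Aggregation Module (step 615), and all other statements fall
# through to normal RDBMS query handling (step 625).
MDD_VIEWS = set()

def create_view(table_name):
    """Step 607: register a table name TN as a link to the MDDB."""
    MDD_VIEWS.add(table_name)
    return f"CREATE VIEW {table_name} ..."  # link definition elided

def route(sql_table):
    """Step 613: decide where a statement on `sql_table` is serviced."""
    if sql_table in MDD_VIEWS:
        return "MDD Aggregation Module"   # step 615
    return "normal RDBMS query handling"  # step 625

create_view("SALES_CUBE")
print(route("SALES_CUBE"))  # MDD Aggregation Module
print(route("EMPLOYEES"))   # normal RDBMS query handling
```

The essential point the sketch captures is that, from the user's side, the MDDB-backed view is queried with ordinary SQL; only the query handler's routing decision differs.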
It should be noted that the facts (base data), as they arrive from the RDBMS, may be analyzed and reordered to optimize hierarchy handling, according to the unique method of the present invention, as described later with reference to
Moreover, the MDD control module of the MDD Aggregation Module preferably administers the aggregation process according to the method illustrated in
The SQL handling mechanism shown in
Preferably, the MDD aggregation module of the RDBMS of the present invention supports a segmented data aggregation method as described in FIGS. 9A through 9C2. These figures outline a simplified setting of three dimensions only; however, the following analysis applies to any number of dimensions as well.
The data is divided into autonomic segments to minimize the amount of simultaneously handled data. The initial aggregation is performed on a single dimension only, while later on the aggregation process involves all other dimensions.
At the first stage of the aggregation method, an aggregation is performed along dimension 1. The first stage can be performed on more than one dimension. As shown in
In the next stage shown in
The principle of data segmentation can be applied on the first stage as well. However, only a large enough data set will justify such a sliced procedure in the first dimension. Actually, it is possible to consider each segment as an N−1 cube, enabling recursive computation.
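The segmented aggregation method outlined above can be sketched on a tiny three-dimensional cube. The base values are hypothetical, and the two stages below are only an illustrative miniature of the method, not its full implementation:

```python
# Segmented aggregation sketch on a D1 x D2 x D3 cube. Stage 1 aggregates
# along dimension 1 only; the resulting D2 x D3 slice then decomposes into
# independent segments (here, rows) that can be rolled up in any order.
cube = [  # cube[d1][d2][d3], hypothetical base data (2 x 2 x 2)
    [[1, 2], [3, 4]],
    [[5, 6], [7, 8]],
]

# Stage 1: aggregate along dimension 1, producing one D2 x D3 slice.
slice_ = [
    [sum(cube[d1][d2][d3] for d1 in range(len(cube))) for d3 in range(2)]
    for d2 in range(2)
]
print(slice_)  # [[6, 8], [10, 12]]

# Stage 2: each row of the slice is an autonomic segment; roll up along D3.
# Segments are independent, so this loop could run in any order.
segment_totals = [sum(row) for row in slice_]
print(segment_totals)  # [14, 22]
```

Because the stage-2 segments are independent, they can be processed in an arbitrary sequence, which is precisely the property the Query Directed Roll-up described next exploits.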
It is sometimes imperative to get aggregation results for a specific slice before the entire aggregation is completed, or alternatively, to have the roll-up done in a particular sequence. The novel feature of the aggregation method of the present invention is that it allows querying to begin even before the regular aggregation process is complete, while still providing fast response. Moreover, in relational OLAP and other systems requiring only partial aggregations, the QDR process dramatically speeds up the query response.
The QDR process is made feasible by the slice-oriented roll-up method of the present invention. After aggregating the first dimension(s), the multidimensional space is composed of independent multidimensional cubes (slices). These cubes can be processed in any arbitrary sequence.
Consequently the aggregation process of the present invention can be monitored by means of files, shared memory sockets, or queues to statically or dynamically set the roll-up order.
In order to satisfy a single query before the required aggregation result has been prepared, the QDR process of the present invention involves performing a fast on-the-fly aggregation (roll-up) involving only a thin slice of the multidimensional data.
FIG. 9C1 shows a slice required for building up a roll-up result of the 2nd dimension. In case 1, as shown, the aggregation starts from existing data, either basic or previously aggregated in the first dimension. This data is utilized as a basis for QDR aggregation along the second dimension. In case 2, due to lack of previous data, a QDR involves an initial slice aggregation along dimension 3, and thereafter aggregation along the 2nd dimension.
FIG. 9C2 shows two corresponding QDR cases for gaining results in the 3rd dimension. Cases 1 and 2 differ in the amount of initial aggregation required in the 2nd dimension.
A search for a queried data point is then performed by accessing the DIR file. The search along the file can be made using a simple binary search due to the file's ascending order. When the record is found, it is loaded into main memory to search for the required point, characterized by its index IND k. The attached Data field represents the queried value. If the exact index is not found, the point is NA (not available).
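The lookup just described can be sketched as a binary search over index-ordered records. The record layout and values below are hypothetical stand-ins for the DIR file:

```python
# Binary search over ascending-ordered (index, data) records, standing in
# for the DIR file: return the Data field for an index, or None when the
# exact index is absent (the point is NA).
import bisect

dir_records = [(3, 42.0), (7, 19.5), (12, 8.0)]  # sorted by index
indices = [rec[0] for rec in dir_records]

def lookup(ind_k):
    """Binary-search the records for index IND k."""
    pos = bisect.bisect_left(indices, ind_k)
    if pos < len(indices) and indices[pos] == ind_k:
        return dir_records[pos][1]  # attached Data field: the queried value
    return None  # exact index not found: the point is NA

print(lookup(7))  # 19.5
print(lookup(5))  # None -> NA
```

The ascending order is what makes the O(log n) search possible; an unsorted file would force a scan, which is exactly the cost the structure is designed to avoid.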
Notably, when using prior art techniques, multiple handling of data elements, which occurs when a data element is accessed more than once during the aggregation process, has been hitherto unavoidable when the main concern is to effectively handle sparse data. The data structures used in prior art data handling methods have been designed for fast access to available data (not NA data). According to prior art techniques, each access is associated with a time-consuming search and retrieval in the data structure. For the massive amount of data typically accessed from a Data Warehouse in an OLAP application, such multiple handling of data elements has significantly degraded the efficiency of prior art data aggregation processes. When using prior art data handling techniques, the data element D shown in
In accordance with the present invention, the MDD aggregation module of the RDBMS performs the loading of base data and the aggregation and storage of the aggregated data in a way that limits the access of each data element to a singular occurrence, as opposed to the multiple occurrences taught by prior art methods. According to the present invention, elements of base data and their aggregated results are contiguously stored in such a way that each element is accessed only once. This particular order allows forward-only handling, never backward. Once a base data element is stored, or an aggregated result is generated and stored, it is never retrieved again for further aggregation. As a result, storage access is minimized. This singular handling greatly elevates the aggregation efficiency of large databases. The data element D, as any other element, is accessed and handled only once.
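The forward-only, single-access property described above can be sketched as a streaming fold. The base values are hypothetical, and the access counter is added purely to demonstrate the property:

```python
# Forward-only handling sketch: base elements arrive in an order such that
# each is read exactly once, folded into the running aggregate, and never
# retrieved again.
base_stream = [4, 1, 7, 2]  # hypothetical base data, pre-ordered for forward-only handling
access_count = {}
running_total = 0

for position, element in enumerate(base_stream):
    access_count[position] = access_count.get(position, 0) + 1
    running_total += element  # aggregate immediately; element never revisited

print(running_total)  # 14
print(all(c == 1 for c in access_count.values()))  # True: singular handling
```

Contrast this with a structure that must re-fetch earlier elements for each higher aggregation level, where the same element is searched for and retrieved repeatedly.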
Functional Advantages Gained by the Improved RDBMS of the Present Invention
The features of the RDBMS of the present invention provide for dramatically improved response time in handling queries issued to the RDBMS that involve aggregation, thus enabling enterprise-wide centralized aggregation. Moreover, in the preferred embodiment of the present invention, users can query the aggregated data in a manner no different from traditional queries on an RDBMS.
The method of Segmented Aggregation employed by the novel RDBMS of the present invention provides flexibility, scalability, the capability of Query Directed Aggregation, and speed improvement.
Moreover, the method of Query Directed Aggregation (QDR) employed by the novel RDBMS of the present invention minimizes the data handling operations in multi-hierarchy data structures, eliminates the need to wait for full aggregation to be complete, and provides for the build-up of the aggregated data required for full aggregation.
It is understood that the System and Method of the illustrative embodiments described hereinabove may be modified in a variety of ways which will become readily apparent to those skilled in the art having the benefit of the novel teachings disclosed herein. All such modifications and variations of the illustrative embodiments thereof shall be deemed to be within the scope and spirit of the present invention as defined by the Claims to Invention appended hereto.
This is a Continuation of application Ser. No. 10/136,937 filed May 1, 2002, now abandoned, which is a Continuation of application Ser. No. 09/634,748 filed Aug. 9, 2000, now U.S. Pat. No. 6,385,604, which is a Continuation-in-part of: application Ser. No. 09/514,611 filed Feb. 28, 2000, now U.S. Pat. No. 6,434,544, and application Ser. No. 09/368,241 filed Aug. 4, 1999, now U.S. Pat. No. 6,408,292; said Applications being commonly owned by HyperRoll Israel, Limited, and herein incorporated by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
4868733 | Fujisawa et al. | Sep 1989 | A |
4985856 | Kaufman et al. | Jan 1991 | A |
5095427 | Tanaka et al. | Mar 1992 | A |
5202985 | Goyal | Apr 1993 | A |
5257365 | Powers et al. | Oct 1993 | A |
5293615 | Amada | Mar 1994 | A |
5307484 | Baker et al. | Apr 1994 | A |
5359724 | Earle | Oct 1994 | A |
5379419 | Heffernan et al. | Jan 1995 | A |
5404506 | Fujisawa et al. | Apr 1995 | A |
5410693 | Yu et al. | Apr 1995 | A |
5519859 | Grace | May 1996 | A |
5553226 | Kiuchi et al. | Sep 1996 | A |
5555408 | Fujisawa et al. | Sep 1996 | A |
5696916 | Yamazaki et al. | Dec 1997 | A |
5706495 | Chadha et al. | Jan 1998 | A |
5706503 | Poppen et al. | Jan 1998 | A |
5721910 | Unger et al. | Feb 1998 | A |
5742806 | Reiner et al. | Apr 1998 | A |
5745764 | Leach et al. | Apr 1998 | A |
5765028 | Gladden | Jun 1998 | A |
5767854 | Anwar | Jun 1998 | A |
5781896 | Dalal | Jul 1998 | A |
5794228 | French et al. | Aug 1998 | A |
5794229 | French et al. | Aug 1998 | A |
5794246 | Sankaran et al. | Aug 1998 | A |
5799300 | Agrawal et al. | Aug 1998 | A |
5805885 | Leach et al. | Sep 1998 | A |
5822751 | Gray et al. | Oct 1998 | A |
5832475 | Agrawal et al. | Nov 1998 | A |
5850547 | Waddington et al. | Dec 1998 | A |
5852819 | Beller | Dec 1998 | A |
5852821 | Chen et al. | Dec 1998 | A |
5864857 | Ohata et al. | Jan 1999 | A |
5867501 | Horst et al. | Feb 1999 | A |
5884299 | Ramesh et al. | Mar 1999 | A |
5890151 | Agrawal et al. | Mar 1999 | A |
5890154 | Hsiao et al. | Mar 1999 | A |
5905985 | Malloy et al. | May 1999 | A |
5918225 | White et al. | Jun 1999 | A |
5918232 | Pouschine et al. | Jun 1999 | A |
5926818 | Malloy | Jul 1999 | A |
5926820 | Agrawal et al. | Jul 1999 | A |
5940818 | Malloy et al. | Aug 1999 | A |
5940822 | Haderle et al. | Aug 1999 | A |
5943668 | Malloy et al. | Aug 1999 | A |
5943677 | Hicks | Aug 1999 | A |
5946692 | Faloutsos et al. | Aug 1999 | A |
5946711 | Donnelly | Aug 1999 | A |
5963936 | Cochrane et al. | Oct 1999 | A |
5974416 | Anand et al. | Oct 1999 | A |
5978788 | Castelli et al. | Nov 1999 | A |
5978796 | Malloy et al. | Nov 1999 | A |
5987467 | Ross et al. | Nov 1999 | A |
5990892 | Urbain | Nov 1999 | A |
5991754 | Raitto et al. | Nov 1999 | A |
5999192 | Selfridge et al. | Dec 1999 | A |
6003024 | Bair et al. | Dec 1999 | A |
6003029 | Agrawal et al. | Dec 1999 | A |
6003036 | Martin | Dec 1999 | A |
6006216 | Griffin et al. | Dec 1999 | A |
6009432 | Tarin | Dec 1999 | A |
6014670 | Zamanian et al. | Jan 2000 | A |
6023695 | Osborn et al. | Feb 2000 | A |
6034697 | Becker | Mar 2000 | A |
6064999 | Dalal | May 2000 | A |
6073140 | Morgan et al. | Jun 2000 | A |
6078918 | Allen et al. | Jun 2000 | A |
6078924 | Ainsbury et al. | Jun 2000 | A |
6078994 | Carey | Jun 2000 | A |
6094651 | Agrawal et al. | Jul 2000 | A |
6108647 | Poosala et al. | Aug 2000 | A |
6115705 | Larson | Sep 2000 | A |
6115714 | Gallagher et al. | Sep 2000 | A |
6122628 | Castelli et al. | Sep 2000 | A |
6122636 | Malloy et al. | Sep 2000 | A |
6125624 | Prociw | Oct 2000 | A |
6134541 | Castelli et al. | Oct 2000 | A |
6141655 | Johnson et al. | Oct 2000 | A |
6151584 | Papierniak et al. | Nov 2000 | A |
6151601 | Papierniak et al. | Nov 2000 | A |
6154766 | Yost et al. | Nov 2000 | A |
6161103 | Rauer et al. | Dec 2000 | A |
6163774 | Lore et al. | Dec 2000 | A |
6167396 | Lokken | Dec 2000 | A |
6173310 | Yost et al. | Jan 2001 | B1 |
6182061 | Matsuzawa et al. | Jan 2001 | B1 |
6182062 | Fujisawa et al. | Jan 2001 | B1 |
6189004 | Rassen et al. | Feb 2001 | B1 |
6199063 | Colby et al. | Mar 2001 | B1 |
6208975 | Bull et al. | Mar 2001 | B1 |
6209036 | Aldred et al. | Mar 2001 | B1 |
6212515 | Rogers | Apr 2001 | B1 |
6212524 | Weissman et al. | Apr 2001 | B1 |
6219654 | Ruffin | Apr 2001 | B1 |
6223573 | Grewal et al. | May 2001 | B1 |
6226647 | Venkatasubramanian et al. | May 2001 | B1 |
6256676 | Taylor et al. | Jul 2001 | B1 |
6260050 | Yost et al. | Jul 2001 | B1 |
6269393 | Yost et al. | Jul 2001 | B1 |
6275818 | Subramanian et al. | Aug 2001 | B1 |
6282544 | Tse et al. | Aug 2001 | B1 |
6285994 | Bui et al. | Sep 2001 | B1 |
6289334 | Reiner et al. | Sep 2001 | B1 |
6289352 | Proctor | Sep 2001 | B1 |
6301579 | Becker | Oct 2001 | B1 |
6317750 | Tortolani et al. | Nov 2001 | B1 |
6321206 | Honarvar | Nov 2001 | B1 |
6324623 | Carey | Nov 2001 | B1 |
6332130 | Notani et al. | Dec 2001 | B1 |
6339775 | Zamanian et al. | Jan 2002 | B1 |
6356900 | Egilsson et al. | Mar 2002 | B1 |
6363353 | Chen | Mar 2002 | B1 |
6363393 | Ribitzky | Mar 2002 | B1 |
6366905 | Netz | Apr 2002 | B1 |
6366922 | Althoff | Apr 2002 | B1 |
6374234 | Netz | Apr 2002 | B1 |
6374263 | Bunger et al. | Apr 2002 | B1 |
6377934 | Chen et al. | Apr 2002 | B1 |
6381605 | Kothuri et al. | Apr 2002 | B1 |
6401117 | Narad et al. | Jun 2002 | B1 |
6405173 | Honarvar et al. | Jun 2002 | B1 |
6405207 | Petculescu et al. | Jun 2002 | B1 |
6411313 | Conlon et al. | Jun 2002 | B1 |
6411681 | Nolting et al. | Jun 2002 | B1 |
6411961 | Chen et al. | Jun 2002 | B1 |
6418427 | Egilsson et al. | Jul 2002 | B1 |
6418450 | Daudenarde | Jul 2002 | B2 |
6421730 | Narad et al. | Jul 2002 | B1 |
6424979 | Livingston et al. | Jul 2002 | B1 |
6430545 | Honarvar et al. | Aug 2002 | B1 |
6430547 | Busche et al. | Aug 2002 | B1 |
6434557 | Egilsson et al. | Aug 2002 | B1 |
6438537 | Netz et al. | Aug 2002 | B1 |
6442269 | Ehrlich et al. | Aug 2002 | B1 |
6442560 | Berger et al. | Aug 2002 | B1 |
6446059 | Berger et al. | Sep 2002 | B1 |
6446061 | Doerre et al. | Sep 2002 | B1 |
6453322 | DeKimpe et al. | Sep 2002 | B1 |
6456999 | Netz | Sep 2002 | B1 |
6460031 | Wilson et al. | Oct 2002 | B1 |
6470344 | Kothuri et al. | Oct 2002 | B1 |
6473750 | Petculescu et al. | Oct 2002 | B1 |
6480842 | Agassi et al. | Nov 2002 | B1 |
6480848 | DeKimpe et al. | Nov 2002 | B1 |
6480850 | Veldhuisen | Nov 2002 | B1 |
6484179 | Roccaforte | Nov 2002 | B1 |
6487547 | Ellison et al. | Nov 2002 | B1 |
6460026 | Pasumansky et al. | Dec 2002 | B1 |
6493718 | Petculescu et al. | Dec 2002 | B1 |
6493723 | Busche | Dec 2002 | B1 |
6510457 | Ayukawa et al. | Jan 2003 | B1 |
6513019 | Lewis | Jan 2003 | B2 |
6532458 | Chaudhuri et al. | Mar 2003 | B1 |
6535866 | Iwadate | Mar 2003 | B1 |
6535868 | Galeazzi et al. | Mar 2003 | B1 |
6542886 | Chaudhuri et al. | Apr 2003 | B1 |
6542895 | DeKimpe et al. | Apr 2003 | B1 |
6546395 | DeKimpe et al. | Apr 2003 | B1 |
6546545 | Honarvar et al. | Apr 2003 | B1 |
6549907 | Fayyad et al. | Apr 2003 | B1 |
6557008 | Temple et al. | Apr 2003 | B1 |
6560594 | Cochrane et al. | May 2003 | B2 |
6567796 | Yost et al. | May 2003 | B1 |
6567814 | Bankier et al. | May 2003 | B1 |
6581054 | Bogrett | Jun 2003 | B1 |
6581068 | Bensoussan et al. | Jun 2003 | B1 |
6587547 | Zirngibl et al. | Jul 2003 | B1 |
6587857 | Carothers et al. | Jul 2003 | B1 |
6601034 | Honarvar et al. | Jul 2003 | B1 |
6604135 | Rogers et al. | Aug 2003 | B1 |
6606638 | Tarin | Aug 2003 | B1 |
6609120 | Honarvar et al. | Aug 2003 | B1 |
6615096 | Durrant et al. | Sep 2003 | B1 |
6628312 | Rao et al. | Sep 2003 | B1 |
6633875 | Brady | Oct 2003 | B2 |
6643608 | Hershey et al. | Nov 2003 | B1 |
6671715 | Langseth et al. | Dec 2003 | B1 |
6677963 | Mani et al. | Jan 2004 | B1 |
6678674 | Saeki | Jan 2004 | B1 |
6691118 | Gongwer et al. | Feb 2004 | B1 |
6691140 | Bogrett | Feb 2004 | B1 |
6694316 | Langseth et al. | Feb 2004 | B1 |
6707454 | Barg et al. | Mar 2004 | B1 |
6708155 | Honarvar et al. | Mar 2004 | B1 |
6738975 | Yee et al. | May 2004 | B1 |
6816854 | Reiner et al. | Nov 2004 | B2 |
6826593 | Acharya et al. | Nov 2004 | B1 |
6836894 | Hellerstein et al. | Dec 2004 | B1 |
6842758 | Bogrett | Jan 2005 | B1 |
6867788 | Petculescu et al. | May 2005 | B1 |
6898603 | Petculescu et al. | May 2005 | B1 |
6934687 | Papierniak et al. | Aug 2005 | B1 |
6947934 | Chen et al. | Sep 2005 | B1 |
Number | Date | Country |
---|---|---|
0 314 279 | May 1989 | EP |
0 743 609 | Nov 1996 | EP |
0 336 584 | Feb 1997 | EP |
0 869 444 | Oct 1998 | EP |
WO 9508794 | Mar 1995 | WO |
WO 9840829 | Sep 1998 | WO |
WO9849636 | Nov 1998 | WO |
WO 9909492 | Feb 1999 | WO |
Number | Date | Country | |
---|---|---|---|
20030200221 A1 | Oct 2003 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 10136937 | May 2002 | US |
Child | 10314868 | US | |
Parent | 09634748 | Aug 2000 | US |
Child | 10136937 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 09514611 | Feb 2000 | US |
Child | 09634748 | US | |
Parent | 09368241 | Aug 1999 | US |
Child | 09514611 | US |