The present invention relates generally to a method and system for providing business intelligence data, and in particular to increasing the efficiency of, and reducing the resources used in, obtaining business intelligence data by using data storage structures in the form of baby fact tables.
Business intelligence is a set of methodologies, processes, architectures, and technologies that transform raw data into meaningful and useful information used to enable more effective strategic, tactical, and operational insights and decision-making for an organization. Better decision-making is the driver for business intelligence. Generally, the process of providing business intelligence data starts with determining what kinds of summaries and reports a user may be interested in. Key business users are consulted to identify the types of reports and summaries they require. Due to the amount of resources required to effect changes in the types of reports and summaries that are generated, significant care is taken in designing them.
Once the required reports and summaries have been identified, the data required to generate these summaries and reports is determined. The data is typically stored by one or more Enterprise Information Systems (“EISes”), such as an Enterprise Resource Planning (“ERP”) system. These EISes are referred to herein as “source systems”. The particular location of the data in the source systems is noted, and extraction functions are coded to extract the data from the specific locations. In general, the goal of the extraction phase is to convert the data into a single format that is appropriate for transformation processing. Thus, the extraction functions not only retrieve the data from the source systems, but they parse and align the data with other data from the same or other source systems. As extraction functions have to be manually coded and tested for the data from each specific location, this step can be lengthy.
Transformation functions are then designed to transform and structure the data extracted from the source system(s) to enable rapid generation of the desired summaries and reports. The transform stage applies a series of rules or functions to the data extracted from the source system(s) to derive the data for loading into the end target. Some data sources require very little or even no manipulation of data. In other cases, one or more transformations may be required to be applied to the extracted data to meet the business and technical needs of a target database that is used to generate reports and summaries. Depending on the amount of transforming, the design and testing of the transformation functions can be a lengthy procedure.
Transformed data is then loaded into an end target, typically a data warehouse, that can be queried by users via business intelligence clients. Depending on the requirements of the organization, this process varies widely. Some data warehouses may overwrite existing information with cumulative information, with extracted data typically refreshed on a daily, weekly or monthly basis. Other data warehouses (or even other parts of the same data warehouse) may add new data in a historicized form, for example, hourly.
This process of providing business intelligence data is very manually intensive and requires significant expertise. The entire process typically takes from two to six months. Further, changes to the structure and/or format of the data in the source systems to be extracted can require significant manual recoding of the extraction functions. Further, changes to the information desired from the summaries and reports can require significant recoding of the extraction, transformation and load functions. As this is generally performed manually, the effort required can be substantial and is very sensitive to human error.
Accordingly, it is an object of the invention to provide a novel method and system for providing business intelligence data.
According to one embodiment of the invention there is provided a machine implemented method for collecting business intelligence data including the steps of providing a master database table on a computer readable medium and accessible by an analytics server; accessing rows in the master database table by the analytics server, where each row in the master database table contains at least a partial set of measures for a plurality of dimensions; identifying by the analytics server a set of one or more dimensions from the plurality of dimensions subject to a query by one or more computing devices in communication with the analytics server; extracting the set of one or more dimensions and associated measures for each of the one or more dimensions from the master database table; and, forming a baby fact table, where each row in the baby fact table contains the set of one or more dimensions subject to the query and the associated measures derived from the extracting step.
According to one aspect of the invention, the master database table comprises a plurality of tables, each of the plurality of tables related to a category of business intelligence data.
According to another aspect of the invention, the method further includes identifying by the analytics server further sets of one or more dimensions from the plurality of dimensions subject to additional queries; forming one or more additional baby fact tables, wherein each row in each of the one or more additional baby fact tables contains a unique set of one or more dimensions subject to one of the additional queries and measures associated with dimensions in the unique set.
According to another aspect of the invention, the method further includes prior to the providing step, importing by the analytics server data aggregated from one or more computer readable data sources in the form of source data, generating one or more dimensions from the source data, wherein the one or more dimensions define categories into which portions of the normalized data can be grouped, generating one or more measures from the source data linked to the one or more dimensions, and storing the one or more dimensions and one or more measures in the master database table.
According to another aspect of the invention, the master database table comprises a plurality of tables, each of the plurality of tables related to a category of business intelligence data.
According to another aspect of the invention, the method further includes storing a record of the query and each of the additional queries on the computer readable medium.
According to another aspect of the invention, the method further includes forming the plurality of baby fact tables based on the record of the query and each of the additional queries prior to a subsequent step of importing source data.
According to another aspect of the invention, the method further includes creating a facts table on the computer readable medium storing an identification of each of the plurality of baby fact tables.
According to another aspect of the invention, the method further includes generating a data cube based on information in any one of the baby fact tables.
According to another aspect of the invention, the method further includes upon a condition in which the data cube can be generated by more than one of the baby fact tables, selecting the baby fact table having the fewest rows for generating the data cube.
According to another embodiment of the invention, there is provided a system for collecting business intelligence data including a master database table on a computer readable medium and accessible by an analytics server and computer readable instructions on the computer readable medium for carrying out the method according to the first embodiment, including all its aspects and variations as herein described.
Embodiments will now be described, by way of example only, with reference to the attached Figures, wherein:
The invention is generally applicable to a system for providing business intelligence data using an analytics server specially adapted for this purpose. Accordingly, the description that follows below describes the general system to which the invention may be applied, followed by the use of such baby fact tables in this application and various optimizations and variations in implementation.
An analytics server 20 for providing business intelligence data and its operating environment is shown in
One or more client computing devices 28 are in communication with the analytics server 20, either directly or over a large communications network, such as the Internet 32. Client computing devices 28 may be desktop computers, notebook computers, mobile devices with embedded operating systems, etc. While, in this particular embodiment, the Internet 32 is shown, any other communications network enabling communications between the various devices can be substituted. The client computing devices 28 are personal computers that execute a business intelligence client for connecting to, querying and receiving response data from the analytics server 20.
Data Modeling
Referring now to
Once the source data is extracted from the source system 24 at 104, the source data is imported into import tables 214 (120). The analytics server 20 parses the intermediate data file 212 and constructs the import tables 214 using the source data contained therein. The import tables 214 (whose names are prefixed with “PVI”) generally match the data layout of the intermediate data file 212. These import tables 214 are stored by the analytics server 20.
Once the source data is imported into the import tables 214, it is normalized (130). The analytics engine parses the import tables 214 and identifies the organization's structure, such as its sales and plants, and the structure and recipe for its products. As the layout of the import tables 214 is defined by the established templates used to extract data from the source system 24, the data can be normalized via a set of scripts to generate normalized data tables 216. The data from the import tables 214 is reorganized to minimize redundancy. Relations with anomalies in the data are decomposed in order to produce smaller, well-structured relations. Large tables are divided into smaller (and less redundant) tables and relationships are defined between them. The normalized data tables 216 (whose names are prefixed with “PVN”) are stored by the analytics server 20.
The master data tables 300 store non time-phased elements. These include organizational data 304, inbound materials data 308, resource data 312, activity data 316 and outbound materials data 320. The organizational data 304 includes the following tables:
Referring again to
The resource data 312 is stored in PVN_Resource and includes a list of resources. A resource is any entity that is required to perform the activity but is not materially consumed by the activity (e.g., shifts, equipment, warehouses, etc.). Resources have costs that are usually time-based.
The activity data 316 is stored in PVN_Activity and its associated tables include activity recipes, bills of materials, and routings. An activity is a required task that is needed to add economic value to the product that an organization sells (e.g., press alloy 6543 with die #1234). An activity type is a group of activities that performs similar tasks (e.g., extrusion, painting, cooking, etc.). The activities are linked together via inbound materials and outbound materials. For example, a first activity can produce a widget. A second activity for making a multi-widget may require widgets as inbound materials. As a result, the first activity is performed to produce a widget that is then an inbound material for the second activity.
Returning again to
Production transaction data 328 relating to materials produced is stored in PVN_Production, and includes:
Referring again to
Each master and transaction table has its own associated attribute table 336. For example, PVN_Material has an associated PVN_MaterialAttribute table. These attribute tables 336 allow the analytics engine to aggregate the data in different ways. User-defined attributes can be introduced in these attribute tables 336. In this manner, the data can be aggregated in user-defined manners. The name and description of the attributes are defined in the table PVN_AttributeCategory. The maximum length of the name of an attribute is 25 characters and the maximum length of the value is 50 characters.
Each entity (activity, material, resource, sales, production) that participates in the manufacturing process has its own associated measure table 340. These measure tables 340 record the data for that entity when it takes part in the process. The name and description of the measures have to be defined in the table PVN_MeasureCategory. The maximum length of the name of a measure is 25 characters and the value is always stored as decimal(20,6).
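By way of illustration, the length and precision constraints described above could be expressed as follows; the column names are assumptions made for illustration only, since only the constraints themselves are specified herein:

-- Sketch only: column names and the description length are assumed.
CREATE TABLE PVN_AttributeCategory (
    AttributeName VARCHAR(25) NOT NULL,   -- attribute names are limited to 25 characters
    Description   VARCHAR(255)
);

CREATE TABLE PVN_MeasureCategory (
    MeasureID     VARCHAR(25) NOT NULL,   -- measure names are limited to 25 characters
    Description   VARCHAR(255)
);

-- Attribute values in the attribute tables 336 are limited to 50 characters (VARCHAR(50)),
-- and measure values in the measure tables 340 are always stored as DECIMAL(20,6).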
As illustrated, the final activity, Activity N, has two pre-requisite activities, Activity N−1 (branch 1) and Activity M (branch 2). Each activity along the upper and lower branches of the material activity tree requires that the preceding activity along the branch be completed before it can be performed. A simplified example of such a process may be the production of a simple product, wherein the upper branch represents the assembly of the product from simple components bought from external suppliers, and the lower branch represents the production of packaging for the product. Thus, the last activity, Activity N, may be the placement of the product in the packaging.
The pseudo-code for the importation of source data from the intermediate data file 212 is shown in Appendix A.
Turning back to
The master and transaction data tables 300, 302 contain pre-defined attributes that can be used for analysis. For example, the “PartnerName” attribute associated with the PVN_BusinessPartner identifier for a partner allows business intelligence applications to aggregate the data according to the customer name instead of the customer ID.
The following normalized data tables 216 are used to help generate the dimension and fact tables.
These dimension-related tables are normalized in order to provide flexibility to add extra attributes. In order to add an extra attribute to a dimension (say PVN_Production), the attribute is first specified in the PVN_AttributeCategory. The attribute values are then added to the corresponding attribute table (i.e., PVN_ProductionAttribute).
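For example, a user-defined attribute could be added with statements along the following lines; this is a sketch only, and the column names shown are assumptions for illustration:

-- Step 1: declare the new attribute in PVN_AttributeCategory (column names assumed).
INSERT INTO PVN_AttributeCategory (AttributeName, Description)
VALUES ('ShiftCode', 'Shift on which the material was produced');

-- Step 2: add the attribute values to the corresponding attribute table.
INSERT INTO PVN_ProductionAttribute (ProductionID, AttributeName, AttributeValue)
VALUES ('P-1', 'ShiftCode', 'NIGHT');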
Without further processing, the normalized data tables 216 (i.e., the master and transaction data tables named PVN_*) with their associated attribute tables form the “snowflake schema” for a basic data cube.
These dimension-related tables are then de-normalized to form a “star schema”. Only one join operation is needed to form the cube from the de-normalized dimension-related tables during runtime, increasing efficiency. Also, the attribute value can be transformed to provide more meaningful grouping. For example, a custom de-normalization process can classify the weight (an attribute) of a material as “light” if the value is less than 50 kg, or as “heavy” otherwise. Further, some entities like material (using PVN_MaterialBOM) and activity (using PVN_ActivityGrouping) have hierarchy support. The hierarchy is flattened during the de-normalization process.
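A minimal sketch of such a custom de-normalization rule, assuming a numeric “Weight” attribute and illustrative column names in PVN_Material and PVN_MaterialAttribute, might be:

-- Sketch only: the Weight attribute and the column names are assumptions.
SELECT m.MaterialID,
       m.Description,
       CASE WHEN CAST(ma.AttributeValue AS DECIMAL(20,6)) < 50
            THEN 'light' ELSE 'heavy'
       END AS WeightClass
FROM PVN_Material m
JOIN PVN_MaterialAttribute ma
  ON ma.MaterialID = m.MaterialID
 AND ma.AttributeName = 'Weight';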
For example, for the following material related tables,
PVN_Material:
PVN_AttributeCategory:
PVN_MaterialAttribute:
PVN_MaterialBOM:
the generated material dimension table, PVD_Material, is:
The PVD_Material (or any hierarchical) dimension table includes the attributes for the TOP (i.e., PV_MaterialID) and the LEAF (PV_LEVEL_LEAF). The attributes for the LEAF are prefixed by “PV_”. For example, attribute “Description” refers to the TOP and “PV_Description” refers to the LEAF.
The following attributes are added if the level is greater than 1.
The pseudo-code of the standard de-normalization process is included in Appendix B.
Once the dimensions are generated, measures are calculated (150). Measures include both raw and calculated values. An example of a raw value is the sale price of a material. An example of a calculated value is the cost of a piece of machinery allocated to the manufacture of the material.
Next, allocation measures are calculated (152). Allocation measures include costs that indirectly affect the production of a material. For example, an allocation measure can be the portion of the cost of the construction of a plant or the company's human resources department allocated to the manufacture of the material. Like rate/driver measures, allocation measures can be time-based, volume-based, etc.
Finally, hierarchical structure measures are calculated (153). Hierarchical structure measures are composite measures corresponding to a set of related hierarchically-related objects, events, etc.
A beneficial feature of the analytics server 20 is that it can perform rapid simulation or recalculation of costs that are derived from the bill of materials of a particular product or service. For example, the analytics server 20 can quickly determine what is the updated cost for the end-material if the cost of one or more raw materials of that product is updated. The same is true for other measures associated with the BOM components, such as durations, equipment costs, customer costs, other group-by-combination costs, etc.
In order to determine a revised cost for the end-material for an updated component (material, equipment and/or resource) cost, the BOM of the end-material may be traversed with the updated component (material and equipment) costs. A traversal of the BOM means that the BOM structure is repeatedly queried every time a cost update is being performed. Since this is an often-performed use case, it can be inefficient to continue to traverse the BOM every time. This can become particularly evident when the impact of a change in a raw material is measured across the business of an entire organization.
The analytics server 20 facilitates recalculation of measures related to a BOM by determining the cost relationship between the finished product (i.e., the end-material) and its components (i.e., the raw material inputs).
The total product cost (minus transfer fees, taxes, etc.) consists of all the material and processing costs of that product. The cost contribution of a component material or equipment can be represented by the following:

PRODUCT_COST_com = COM_per × COM_cost

PRODUCT_COST_com represents the cost of a single component relative to a single unit of the finished product. COM_per represents the fraction or percentage representing the weight of COM_cost in PRODUCT_COST_com. For example, if both the finished product and the component have the same unit of measure, then COM_per would simply be Q_com, the quantity of the component used in the BOM to make a single unit of the finished product. COM_cost is the per-unit cost of the component used in the BOM of the finished product. Thus,

TOTAL_PRODUCT_COST = Σ PRODUCT_COST_com

for all components in the BOM.
The above technique can be generalized even further. Since any group-by consumes or is related to products, the use of percentages can be applied to the group-bys themselves. For example, if it is desired to determine the raw material cost impact for a list of customers, COM_per can be calculated for each component and customer combination.
During the calculation phase, the engine calculates the fields PV_FinishedQty and PV_ComponentQty for every component used in all of the product BOMs. The division of these two measures effectively yields the COM_per value. These two measures are stored in the base fact table, and also exist within other derived fact tables.
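A minimal sketch of how COM_per could be recovered from these stored quantities at query time is given below; apart from PV_FinishedQty and PV_ComponentQty, the fact table column names are assumptions for illustration:

-- Sketch only: MaterialDim and ComponentDim are illustrative column names.
SELECT MaterialDim  AS FinishedProduct,
       ComponentDim AS Component,
       SUM(PV_ComponentQty) / NULLIF(SUM(PV_FinishedQty), 0) AS COM_per
FROM PVF_Facts
GROUP BY MaterialDim, ComponentDim;

The revised contribution of a component to the finished product cost is then simply COM_per multiplied by the updated component cost, with no need to re-traverse the BOM.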
A similar technique can be used for equipment or resources, but instead of using quantity, the total hours and component durations are stored.
It is during the process of updating a scenario that the value of the corresponding COM_per is required.
This technique is more efficient and scalable, especially when the intermediate dimensions between the components and group-bys are significantly large. For example, consider a scenario with ten plants, where it is desired to determine the marginal impact when the cost of ten raw materials is changed. Without the above approach, all the products from the ten plants must be located, and the BOM of each product that uses the ten raw materials must be traversed to determine the impact of the cost changes. If the number of products is in the thousands, the effort required to traverse each related BOM is significant.
Referring again to
The pseudo-code for the measure calculation process is in Appendix C.
Production and Sales Matching
In some cases, it can be desirable to more closely relate production costs for specific materials or batches of materials to specific sales. In these cases, the analytics engine attempts to match specific sales to the production of specific materials.
The common attributes that reside in both the sales documents and the production documents are the product code and the order number. All attributes are used first to determine the most accurate match. When a match cannot be found, the attribute list is reduced, starting with the largest number of attributes, yielding more general matches.
The following parameters in table PVC_FunctionParameters are used to control whether match occurs or not:
The following parameters in table PVC_FunctionParameters are used to control matching:
Entries in the ParameterName field of the form criteria.0, criteria.1, etc. specify the order of matching criteria. Each criterion consists of a list of common attributes (pre-defined or user-defined in PVN_AttributeCategory). If there are no matching transactions using criteria.n (say criteria.0), the analytics engine uses criteria.n+1 (i.e. criteria.1) to find the matching transactions. The analytics engine stops searching once all criteria are processed. This parameter supports a special attribute “*lot”. In make-to-stock situations, the set of matched transactions can be obtained by comparing the “Lot” number. The table “PVS_Lot” is used to store the lot information.
Lots are used in multiple production steps. For example,
the resulting PVS_Lot table is as follows:
Cutoff refers to the time interval (in days) in which the analytics engine will try to find a match.
Range defines a time period after the first matched transaction in which other transactions occurring in the time period are included in the matched set for calculating the weighted average.
Production determines the direction of the range for matched production transactions. The values can be “before”, “after” or “both”. The default is “before”, as production transactions usually occur before sales transactions.
Sales determines the direction of the range for matched sales transactions. The values can be “before”, “after” or “both”. The default is “after”, as sales transactions usually occur after production transactions.
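By way of example, these matching parameters might be populated as follows; the value column name and the specific values shown are assumptions for illustration only:

-- Sketch only: the ParameterValue column and all values are illustrative.
INSERT INTO PVC_FunctionParameters (ParameterName, ParameterValue) VALUES ('criteria.0', 'ProductCode,OrderNumber');
INSERT INTO PVC_FunctionParameters (ParameterName, ParameterValue) VALUES ('criteria.1', 'ProductCode');
INSERT INTO PVC_FunctionParameters (ParameterName, ParameterValue) VALUES ('Cutoff', '90');         -- match window of 90 days
INSERT INTO PVC_FunctionParameters (ParameterName, ParameterValue) VALUES ('Range', '7');           -- include transactions within 7 days of the first match
INSERT INTO PVC_FunctionParameters (ParameterName, ParameterValue) VALUES ('Production', 'before'); -- production is usually before the sale
INSERT INTO PVC_FunctionParameters (ParameterName, ParameterValue) VALUES ('Sales', 'after');       -- sales are usually after production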
The fact tables 224 include PVF_Match, which stores the matching between the sales and production records.
The following optional parameters in table PVC_FunctionParameters are used to control the calculation process of the analytics engine.
The master entities (i.e. material, resource and activity) that participate in the manufacturing process have their own associated measure tables (i.e., PVN_MaterialMeasure, PVN_ResourceMeasure and PVN_ActivityMeasure). During the calculation process, these measures are calculated and added to the transactions.
For example, the following fields are used in the PVN_ActivityMeasure tables:
The measures for transaction entities are applied directly to the transactions using PVN_ProductionMeasure and PVN_SalesMeasure. If there are some measures that apply to all transactions, these measures are specified in the PVN_GlobalTransactionMeasure. The following fields are used in these tables:
Using EntityType and EntityID, the analytics engine attributes the measure to a specific material, activity or resource. However, if a measure is associated with two or more entities (like activity and resource) at the same time, the following fields can be used instead of EntityType and EntityID:
All the functions that are used to compute the measures are defined in PVN_MeasureFunction. Some of these functions are pre-defined by the analytics engine, namely com.pvelocity.rpm.calc.ConstantMeasure and com.pvelocity.rpm.calc.TableLookupMeasure.
When the ConstantMeasure function is specified, the “Measure” column represents the final value of the measure. For the following record in PVN_ProductionMeasure:
Thus, the “ProducedWeight” for the production “P−1” is 7.2.
Sometimes, the values of the measure are collected periodically, for example, the total freight cost for a month. In this case, the cost (or measure) is distributed among the transactions according to other measures. The table PVN_PeriodLookup and the TableLookupMeasure function are used to provide this capability. For example, PVN_GlobalTransactionMeasure:
and PVN_PeriodLookup:
where:
Returning again to
Updating of Transaction Data
The analytical information that is used by the analytics engine is updated periodically in order to include the newly generated transaction data. Since the calculation is time-consuming, the analytics engine provides a way to generate the new analytical information incrementally. When new transaction (i.e. sales and production) records are generated, a new database (e.g., PVRPM_Inc) is created to store the new data. It merges in the original analytical information stored in another database (e.g., PVRPM_Base). It may also point to a separate user database (e.g., PVRPM_User) that stores the user preferences. The following parameters in table PVF_FunctionParameters are used by incremental measure calculation:
Cube Generation
Using the dimension tables 220 and the fact tables 224, the analytics engine generates one or more cubes 228 that represent aggregated data used in the analysis. The cubes are defined using SQL tables and are named with the prefix PV3_*. These cubes are cached to improve performance. The table PVF_CachedCubes is used to implement the cache.
The following parameters in PVF_FunctionParameters are used to control the cache:
Cubes 228 aggregate the data in the dimension tables 220 and fact tables 224.
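A minimal sketch of the kind of aggregation a cube represents is shown below; the dimension and measure column names, and the use of the category column, are assumptions for illustration:

-- Sketch only: column names are illustrative; cube tables carry the PV3_ prefix.
SELECT d.PV_MaterialID,
       f.TimeDim,
       SUM(f.Revenue)     AS Revenue,
       SUM(f.FreightCost) AS FreightCost
INTO PV3_SalesByMaterial
FROM PVF_Facts f
JOIN PVD_Material d ON d.MaterialDim = f.MaterialDim
WHERE f.Category = 'S'                        -- sales transactions only
GROUP BY d.PV_MaterialID, f.TimeDim;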
Once the cubes 228 and the formula table 232 have been generated, business intelligence applications 236 enable users to query, manipulate and view the data therein.
Baby Fact Tables
The master facts table PVF_Facts contains data for both sales and production categories. As the master facts table can be very large in many cases, all cube formations (except drill-down and summary cube formations) accessing this master facts table can lead to very slow cubing performance. In order to accelerate the generation of cubes, the analytics engine generates subsets of the fact tables for handling specific queries. These smaller fact tables are referred to as baby fact tables.
Baby fact tables are generally much smaller than the master fact table, thus greatly accelerating access to them. In addition, a baby fact table's clustered index is smaller and is more likely to be cached in memory. Moreover, there is no need to include the category column, a low-selectivity column with only the values ‘S’ and ‘P’, as part of the index. Often many measures only apply to one category (e.g. freight cost is only meaningful in Sales queries), so a baby fact table can forgo the irrelevant measures, and no time is wasted in aggregating these measures. Often some dimensions only apply to one category (e.g. the SalesRep dimension is often not populated in production records); therefore, the clustered index can forgo irrelevant dimensions, resulting in a narrower index. Additional non-clustered indexes can be applied on each baby fact table according to runtime query patterns. In a multi-user environment, I/O contention is reduced since there are two separate, independent fact tables.
The fact tables for some implementations can continue to grow very large, as they take in data for a greater time span, or because the nature of an organization's business and intended usage tend to result in large fact tables (e.g., the organization has many BOMs and needs to see material linkages). The more specific the baby fact table, the better the performance. In order to make a baby fact table very specific, it should contain only the set of dimension(s) that encompass all the query group(s), plus the plan and time dimensions if not already in the set. Also, the formal semantics of a set apply here, in that the order of dimensions is insignificant. With that in mind, the basis for creating baby fact tables should not just be the category, but should be generalized to include a dimension set. The following passages outline an optimization scheme based on this idea.
Similar to the way the analytics engine keeps a cache of generated cubes, the analytics engine has a fact table pool. For each query, based on its category and referenced dimension set, the analytics engine tries to find the most specific baby fact table available in the pool that can satisfy the query. Note that if such a pooled table exists, it may not be the optimal one for that query, because the pooled table may still contain more dimensions than are minimally needed by the query. The analytics engine may fall back to using the category baby fact table (i.e. the one with all dimensions) if no pooled table can satisfy the request. However, the engine keeps track of the number of such fact table pool misses, and the resultant cube creation time. This information is useful to determine whether it is worthwhile to pre-generate the missing baby fact table. Also, the analytics engine maintains statistics on pool hits as well, so that it can further optimize the most-used baby fact tables (e.g. by creating more indexes or subdividing them further).
There are two ways the analytics engine populates the fact table pool. First, parameters in the PVC_RuntimeParameter table specify what baby fact tables are to be pre-generated. The category baby fact tables are automatically created at the end of the generation of measures by the analytics engine. Pool population code is also stored by the analytics server. The pool population code creates all baby fact tables explicitly specified in the PVC_RuntimeParameter table. In addition, the pool population code looks at the fact table pool misses information and creates any baby fact tables deemed worthwhile to pre-generate.
In addition, the analytics engine analyzes the statistics for the most-used baby fact tables to see if it is possible and worthwhile to create more specific baby fact tables from them.
There are 3 levels of Baby Facts Tables for improving performance:
Category Baby Facts Tables:
The master facts table is divided based on category, namely, there is one baby fact table for category Sales (PVF_Sales_Facts), and one for category Production (PVF_Production_Facts). The engine can use either baby table based on the category of the current query.
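By way of illustration, a category baby fact table could be populated along the following lines; the specific dimension and measure columns shown are assumptions, and the low-selectivity category column is omitted from the result:

-- Sketch only: the column list is illustrative, not prescriptive.
SELECT TimeDim, PlanDim, MaterialDim, CustomerDim, SalesRepDim,
       Revenue, FreightCost, PV_FinishedQty, PV_ComponentQty
INTO PVF_Sales_Facts
FROM PVF_Facts
WHERE Category = 'S';
-- An analogous statement with Category = 'P' and production-only columns
-- populates PVF_Production_Facts.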
Time Aggregated Baby Facts Tables:
Since the master and category fact tables contain facts at the day level, but queries are mostly by month or week, generating baby fact tables based on category baby facts tables that are aggregated by month or by week will result in baby facts tables with fewer rows, thus improving cubing performance.
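A minimal sketch of a monthly aggregated category baby fact table follows; the table name, the day-to-month mapping and the column names are assumptions for illustration (the actual naming format is governed by the function parameters described below):

-- Sketch only: names are illustrative; other measures are aggregated the same way.
SELECT t.TimeMthDim,
       f.PlanDim, f.MaterialDim, f.CustomerDim, f.SalesRepDim,
       SUM(f.Revenue)     AS Revenue,
       SUM(f.FreightCost) AS FreightCost
INTO PVF_Sales_Facts_Mth
FROM PVF_Sales_Facts f
JOIN PVD_Time t ON t.TimeDim = f.TimeDim      -- assumed day-to-month mapping in the time dimension
GROUP BY t.TimeMthDim, f.PlanDim, f.MaterialDim, f.CustomerDim, f.SalesRepDim;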
Generalized Baby Facts Tables:
To further reduce the required number of aggregations, the basis for creating baby fact tables can be further generalized to include only the set of dimensions that encompass all the query groups for a query.
The following time dimensions are used to support monthly and weekly aggregated facts tables:
The following PVC_FunctionParameters table enables specification of whether baby fact tables are generated and used at runtime:
Time aggregated category facts tables are created for the sales and production categories only, and are not applicable to user-defined categories. After the category baby fact tables (PVF_Sales_Facts, PVF_Production_Facts) are updated, the monthly and weekly aggregated category baby fact tables are generated if the corresponding function parameter described above is set to YES. These table names have the following format:
The time aggregated category baby fact tables have the same set of dimensions as the base category baby fact table (e.g. PVF_Sales_Facts) except for the time dimension. The monthly aggregated baby fact tables reference the TimeMth dimension (column name TimeMthDim) instead of the time dimension. The weekly aggregated baby fact tables reference the TimeWk dimension (column name TimeWkDim) instead of the time dimension. The clustered index created for these time aggregated category baby fact tables is the set of active dimension columns in descending order of the corresponding PVD table row count. That is, the first column in the clustered index is the dimension column of the dimension with the most entries in its dimension table. No non-clustered index is created.
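For instance, if the Material dimension table has the most rows, followed by the Customer, SalesRep and Plan dimension tables, the clustered index on the illustrative monthly sales table sketched above might be created as follows (a sketch under those assumptions, not a prescribed index):

-- Sketch only: the ordering assumes PVD_Material has the largest row count.
CREATE CLUSTERED INDEX IX_PVF_Sales_Facts_Mth
ON PVF_Sales_Facts_Mth (MaterialDim, CustomerDim, SalesRepDim, PlanDim, TimeMthDim);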
Creation of generalized baby fact tables is invoked after generating time aggregated category baby facts tables. This is driven by the content of the new facts table pool table PVF_FactsTablePool. The generalized baby fact tables may be aggregated by month or week if the corresponding function parameter is set to YES. The fact table pool entries in PVF_FactsTablePool table are manually defined by the implementer of the application. If the PVF_FactsTablePool table does not exist in the database, it will be created automatically upon first use. This table consists of two columns:
The format for the fact table signature specified in column PVF_FactsTablePool.FactsSignature is:
A non-clustered index will be created on the baby fact table based on the order of the dimension names in the signature. Examples of fact table signatures:
The clustered index created on generalized baby fact tables is the set of active dimension columns in descending order of the corresponding PVD table row count. That is, the first column in the clustered index is the dimension column of the dimension with the most entries in its dimension table. One non-clustered index is created based on the order of dimension names in the fact table signature.
The generalized baby fact tables specified in the PVF_FactsTablePool table are loaded into memory at startup of the analytics engine. When a query requires a new cube to be created, the engine will try to find a baby fact table in the pool that will satisfy the query and give the best cube population performance. The choice of baby fact table is based on the following factors:
If no suitable generalized baby fact table is found, cube population will try to use a time aggregated category fact table if the time dimension requirement matches and the corresponding PVC_FunctionParameters parameter is set. If there is still no suitable table, the base category fact table is used.
The factors that determine the time dimension required for the query are:
Measure Derivation
An enterprise business intelligence application 236 allows the user using the business intelligence client 240 to pick a cube 228 for profit analysis by choosing different measures using the values in PVN_MeasureCategory, and different grouping using the values in PVN_AttributeCategory. It also allows the user to add extra measures that are derived from the existing data to the selected cube 228 by using the formula table 232 for profit analysis.
After the PV Measures Tables are de-normalized, each “MeasureID” in the table “PVN_MeasureCategory” is represented by a column (or measure) in the final measure table. If the “MeasureID” is related to a resource, the “Duration” associated with the resource is also represented. A user can add a derived measure by using the existing measures. For example, “paintLabor” and “drillLabor” may be two existing measures. A user can add a derived measure called “totalLabor” by adding “paintLabor” and “drillLabor”. A PVC_DerivedMeasure table is used to store the formula of the derived measures. There are two columns in the table:
In order to calculate the derived measures during runtime, Java bytecode is generated, loaded and executed directly using technology like Jasper or Janino (http://www.janino.net).
These derived measures are evaluated at runtime after the raw measures are aggregated. The derived measures are used to calculate the profit of a set of transactions after the revenue and cost of the transactions are aggregated.
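By way of illustration only, and assuming the two columns of PVC_DerivedMeasure hold a measure identifier and its formula, the “totalLabor” example above might be stored as:

-- Sketch only: the column names are assumptions; only the existence of two columns is specified herein.
INSERT INTO PVC_DerivedMeasure (MeasureID, Formula)
VALUES ('totalLabor', 'paintLabor + drillLabor');

At runtime, the stored formula is compiled into Java bytecode (e.g. via Janino) and evaluated against the aggregated values of “paintLabor” and “drillLabor”.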
There are two challenges to evaluating the derived measures at runtime. First, some measures have to be calculated by the extract, transform, and load (“ETL”) process for each transaction. For example, there is a need to calculate the discount for a sales transaction whose formula is: Discount$=SalePrice×DiscountRate. The ETL process can capture the discount rate and the sales price. Since the discount rate for each transaction is different, the total discount for a set of transactions may not be determined using the runtime-derived measures after the measures for the transactions are aggregated. The only way to calculate the total discount for the group is to determine the individual discount for each transaction and sum them up.
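A minimal sketch of this per-transaction evaluation, using hypothetical table and column names, is:

-- Sketch only: SalesTransactions, SalePrice and DiscountRate are hypothetical names.
SELECT SUM(SalePrice * DiscountRate) AS TotalDiscount   -- evaluated per transaction, then summed
FROM SalesTransactions;
-- Aggregating first and then applying the formula, e.g. SUM(SalePrice) * AVG(DiscountRate),
-- generally yields a different (incorrect) total because the discount rate varies per transaction.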
Second, some measures have to be calculated after matching. For example, there may be a desire to calculate Tax$, whose formula is: Tax$=EBIT×Tax %. Tax % is time dependent and can be different for each transaction. EBIT is a derived measure, which depends on the measures calculated from the matched production records. This situation may not be pushed to the import process because that would mean the entire import process would have to mimic the engine model and configurations to calculate EBIT. This is thus performed at the transaction level, since Tax % cannot be aggregated. The logical place to add this functionality is the calculation step within the analytics engine. These types of calculations can be added as the new last step of the calculation process. Further, the calculations can depend on derived measures, which means these calculation steps depend on the evaluation of other derived measures.
To overcome these challenges, the analytics engine is enhanced to calculate derived measures during the calculation phase for each transaction. These derived measures are treated as raw measures during the runtime. If a derived measure is calculated after import but before matching, the calculated value is projected to the matched transactions during the matching phase. If a derived measure is calculated after matching, the formula can include both sales and production measures.
A new column, “EvaluationPoint”, is added to the PVC_DerivedMeasure table. It can contain the following values:
If the value of “EvaluationPoint” is NULL or the whole “EvaluationPoint” column is missing, it is treated as R and behaves the same as before.
There are three measures that control segmentation. These three measures are specified in the PVC_FunctionParameters. For example,
The formula for these derived measures have to be specified in PVC_DerivedMeasure as well. For example,
The default presentation of segmentation is to use a standard cumulative view with percentiles of 15% and 80%. The user can change the view by using the administrative tools, which will modify the file PVC_SegmentsSetup accordingly.
The administrative tools are used to modify the PVC_SegmentsSetup. The following fields exist in the file:
Scenarios
A scenario business intelligence application allows the user to change the measures (or derived measures) of the cube (one at a time) and examine the impact to the other measures. When this application is invoked, the data of the cube (generated as PV3_* files) is stored in memory. The updated cube information is also kept in the memory. When the scenario is saved, the most up-to-date data is saved in a table named PV4_*.
If the value of a raw measure is changed in the scenario business intelligence application, it only affects the values of the related derived measures. Since the values of the derived measures are computed during runtime, there is no extra work that the analytics engine has to do. However, if the value of a derived measure is changed, it can impact the other raw measures.
Consider the following derived measure:
labCostPerHr=Labor/Duration
The analytics engine can calculate the average labor cost using the derived measure. However, if the user would like to know what will be the impact if the labor cost is increased by 10%, the user has to make some assumptions regarding which raw measure will be kept constant.
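For example, under the illustrative assumption that “Duration” is held constant, a 10% increase in labCostPerHr implies

Labor = 1.10 × labCostPerHr × Duration

whereas holding “Labor” constant would instead require Duration to be recalculated. The chosen assumption is captured by the callback mechanism described next.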
The PVC_DerivedMeasureCallback table specifies how the raw measures will be affected if a derived measure is changed. There are 3 columns in the table
The analytics engine has the ability to model any production process consisting of multiple steps (activity), equipment (resource), and materials. The analytics engine in fact does not limit the number of steps that are involved in the process. The analytics engine employs a model that consists of materials, activities, and resources to facilitate interaction with the data via a standard BOM view.
Using the above modeling techniques, the analytics engine models the production of any product or service. Each step in the production process is in effect an activity that loads zero or more resources, consumes zero or more inbound materials, and produces zero or more outbound materials. The materials can further be qualified by terms that are unique to an organization, such as “finished product”, “part”, etc. This is also true for the other modeling elements. Activities can be qualified as extrusion, paint, etc. Resources can be qualified as press, saw, oven, shift, employee, etc. The qualifications are referred to as attributes.
This BOM information can then be married with a sales transaction, a forecast, a budget, a customer order, etc. The same can be said of a production order. The customer order refers to the material (or finished product) and the activity (or the last step) that is used to produce it. The activities are linked by the materials that are produced in the preceding step.
The model includes the following tables:
Using the relationship specified between activities via the inbound and outbound materials, the analytics engine can determine the unit cost to produce a material. For example,
The portion of the corresponding table containing the parent and child relationships for the materials, PVN_MaterialBOM, is shown in
The analytics engine can calculate the unit material cost by using the following steps:
The analytics engine can also determine the unit equipment cost for a material by using the following steps:
During the fact table calculation, the standard cost can be added using the following parameters in PVC_FunctionParameter:
From the production perspective, the actual transaction cost should be recorded.
In a scenario generated via the scenario business intelligence application, a user can change the BOM in order to simulate the following impacts:
In order to evaluate the impact of the changes in the BOM, “PV_MaterialID” is included in the cube as one of the “group by”. If the scenario created by a user does not contain this column, the scenario business intelligence application automatically generates another scenario by adding this extra column and storing the linking in a file called PVF_StoredScenarioMaterial.
The changes in the BOM for scenarios are stored in a set of files that correspond to the files storing the BOM information. These files are:
The layouts of these files are exactly the same as the layouts of the corresponding files, except for the fact that there are two extra columns:
The capacity display shows the duration for which an equipment entity has been used during a transaction, and its associated profitability measures. This information can be invoked at the main menu, which shows the actual usage, or in a scenario, which shows the standard BOM usage. In order to show the actual usage, the PVN_ProductionMeasure table is populated with actual data. In a scenario, the capacity display shows the equipment usage for the standard BOM.
In the PVC_FunctionParameter, three parameters are added:
In the scenario business intelligence application, both the raw material rate and the equipment rate can be changed. In some cases, it can be convenient to group a set of materials/resources and present them as a single unit to the user, so that when the rate is changed, individual elements within the group are updated with the single changed value. This is especially true for cases where rates are maintained in PV by period, such as by week or month.
Material and/or resource grouping is controlled by parameters defined in the PVC_FunctionParameters table.
In querying the rate for the defined group, the record associated with the largest ID is returned. This is accomplished using the SQL function MAX() on the ID of the group. That is, for material grouping, MAX(PV_MaterialID) is used and for resource grouping, MAX(PV_ResourceID) is used.
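A minimal sketch of such a lookup, in which the table holding the rates and its columns are assumptions for illustration, is:

-- Sketch only: the rate table and its columns are illustrative.
SELECT Measure AS GroupRate
FROM PVN_MaterialMeasure
WHERE PV_MaterialID = (
    SELECT MAX(PV_MaterialID)                       -- the record associated with the largest ID is returned
    FROM PVN_MaterialMeasure
    WHERE PV_MaterialID IN ('ALLOY-A', 'ALLOY-B')   -- assumed members of the defined group
);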
Computer-executable instructions for implementing the analytics engine and/or the method for providing business intelligence data on a computer system could be provided separately from the computer system, for example, on a computer-readable medium (such as, for example, an optical disk, a hard disk, a USB drive or a media card) or by making them available for downloading over a communications network, such as the Internet.
While the analytics server described in the embodiment above provides business intelligence data from a single source system, those skilled in the art will appreciate that the analytics server may receive and aggregate data from two or more source systems in providing business intelligence data.
While the invention has been described with specificity to certain operating systems, the specific approach of modifying the methods described hereinabove will occur to those of skill in the art.
While the analytics server is shown as a single physical computer, it will be appreciated that the analytics server can include two or more physical computers in communication with each other. Accordingly, while the embodiment shows the various components of the server computer residing on the same physical computer, those skilled in the art will appreciate that the components can reside on separate physical computers.
The above-described embodiments are intended to be examples of the present invention and alterations and modifications may be effected thereto, by those of skill in the art, without departing from the scope of the invention that is defined solely by the claims appended hereto.