The present disclosure relates to resolving over multiple hierarchies. Specifically, various techniques and systems are provided for adjusting multiple hierarchies for consistency within levels of the hierarchies, using an optimization-based approach that results in an accurate projection across dimensions and levels in hierarchies.
The present disclosure relates to resolving over multiple hierarchies. Specifically, various techniques and systems are provided for adjusting multiple hierarchies for consistency within levels of the hierarchies, using an optimization-based approach that results in an accurate projection across dimensions and levels in hierarchies. In an embodiment, a computer-program product may be tangibly embodied in a non-transitory machine-readable storage medium. The non-transitory machine-readable storage medium may include instructions configured to cause a data processing apparatus to receive data associated with nodes of two or more hierarchies, wherein nodes are associated with original node values, identify a common level node and a target level node for each of the hierarchies, identify a linking constraint, wherein the linking constraint includes a rule to adjust a node from a hierarchy to make it consistent with a node from another hierarchy, apply the linking constraint to the common level node of each of the hierarchies, wherein applying the linking constraint includes generating updated common node values associated with the common level nodes, and wherein updated common node values are the same node values, apply the updated common node values to the target level node of each of the hierarchies, wherein applying the updated common node values includes generating updated target node values, and generate a resolved hierarchy using the updated target node values.
In an aspect, the updated common node values have the same value as a value of one of the common level nodes before the linking constraint is applied to the common level node of each of the hierarchies. In another aspect, the computer-program product may further comprise instructions configured to cause the data processing apparatus to compute a change in target node value associated with each of the two or more hierarchies, determine that the change in target node value associated with one or more of the hierarchies is above a predetermined threshold, and re-apply the linking constraint to the common level node of each of the hierarchies, wherein applying the linking constraint includes generating alternative common node values associated with the common level nodes, wherein the alternative common node values are the same node values, and wherein the alternative common node values are different than the updated target node values. In another aspect, a target level node of a hierarchy includes the most accurate data of all nodes within the hierarchy. In another aspect, applying a linking constraint to common level nodes forces common level node values associated with the common level nodes to be the same value. In another aspect, the computer-program product may further comprise instructions configured to cause the data processing apparatus to compute disaggregation factors for each of the nodes of two or more hierarchies, and determine, using the disaggregation factors for a parent node in a hierarchy, whether a set of child nodes of the parent node comply with flow conservation properties, wherein determining whether a set of child nodes of the parent node comply with flow conservation properties includes determining if a sum of the node values of the child nodes equals the node value of the parent node. In another aspect, determining that the sum of the node values of the child nodes does not equal the node value of the parent node includes comparing the difference between the sum of the node values of the child nodes and the node value of the parent node to a predetermined threshold. In another aspect, the computer-program product may further comprise instructions configured to cause the data processing apparatus to determine that the set of child nodes of the parent node comply with flow conservation properties if the difference between the sum of the node values of the child nodes and the node value of the parent node is less than the predetermined threshold.
In another embodiment, a computing device may comprise one or more processors, and a memory having instructions stored thereon, which when executed by the one or more processors, cause the computing device to perform operations including receiving data associated with nodes of two or more hierarchies, wherein nodes are associated with original node values, identifying a common level node and a target level node for each of the hierarchies, identifying a linking constraint, wherein the linking constraint includes a rule to adjust a node from a hierarchy to make it consistent with a node from another hierarchy, applying the linking constraint to the common level node of each of the hierarchies, wherein applying the linking constraint includes generating updated common node values associated with the common level nodes, and wherein updated common node values are the same node values, applying the updated common node values to the target level node of each of the hierarchies, wherein applying the updated common node values includes generating updated target node values, and generating a resolved hierarchy using the updated target node values.
In an aspect, the updated common node values have the same value as a value of one of the common level nodes before the linking constraint is applied to the common level node of each of the hierarchies. In another aspect, the computing device may further comprise instructions, which when executed by the one or more processors, cause the computing device to perform operations including computing a change in target node value associated with each of the two or more hierarchies, determining that the change in target node value associated with one or more of the hierarchies is above a predetermined threshold, and re-applying the linking constraint to the common level node of each of the hierarchies, wherein applying the linking constraint includes generating alternative common node values associated with the common level nodes, wherein the alternative common node values are the same node values, and wherein the alternative common node values are different than the updated target node values. In another aspect, a target level node of a hierarchy includes the most accurate data of all nodes within the hierarchy. In another aspect, applying a linking constraint to common level nodes forces common level node values associated with the common level nodes to be the same value. In another aspect, the computing device may further comprise instructions, which when executed by the one or more processors, cause the computing device to perform operations including computing disaggregation factors for each of the nodes of two or more hierarchies, and determining, using the disaggregation factors for a parent node in a hierarchy, whether a set of child nodes of the parent node comply with flow conservation properties, wherein determining whether a set of child nodes of the parent node comply with flow conservation properties includes determining if a sum of the node values of the child nodes equals the node value of the parent node. In another aspect, determining that the sum of the node values of the child nodes does not equal the node value of the parent node includes comparing the difference between the sum of the node values of the child nodes and the node value of the parent node to a predetermined threshold. In another aspect, the computing device may further comprise instructions, which when executed by the one or more processors, cause the computing device to perform operations including determining that the set of child nodes of the parent node comply with flow conservation properties if the difference between the sum of the node values of the child nodes and the node value of the parent node is less than the predetermined threshold.
In another embodiment, a computer-implemented method may comprise receiving data associated with nodes of two or more hierarchies, wherein nodes are associated with original node values, identifying a common level node and a target level node for each of the hierarchies, identifying a linking constraint, wherein the linking constraint includes a rule to adjust a node from a hierarchy to make it consistent with a node from another hierarchy, applying the linking constraint to the common level node of each of the hierarchies, wherein applying the linking constraint includes generating updated common node values associated with the common level nodes, and wherein updated common node values are the same node values, applying the updated common node values to the target level node of each of the hierarchies, wherein applying the updated common node values includes generating updated target node values, and generating a resolved hierarchy using the updated target node values.
In an aspect, the updated common node values have the same value as a value of one of the common level nodes before the linking constraint is applied to the common level node of each of the hierarchies. In another aspect, the method may further comprise computing a change in target node value associated with each of the two or more hierarchies, determining that the change in target node value associated with one or more of the hierarchies is above a predetermined threshold, and re-applying the linking constraint to the common level node of each of the hierarchies, wherein applying the linking constraint includes generating alternative common node values associated with the common level nodes, wherein the alternative common node values are the same node values, and wherein the alternative common node values are different than the updated target node values. In another aspect, a target level node of a hierarchy includes the most accurate data of all nodes within the hierarchy. In another aspect, applying a linking constraint to common level nodes forces common level node values associated with the common level nodes to be the same value. In another aspect, the method may further comprise computing disaggregation factors for each of the nodes of two or more hierarchies, and determining, using the disaggregation factors for a parent node in a hierarchy, whether a set of child nodes of the parent node comply with flow conservation properties, wherein determining whether a set of child nodes of the parent node comply with flow conservation properties includes determining if a sum of the node values of the child nodes equals the node value of the parent node. In another aspect, determining that the sum of the node values of the child nodes does not equal the node value of the parent node includes comparing the difference between the sum of the node values of the child nodes and the node value of the parent node to a predetermined threshold. In another aspect, the method may further comprise determining that the set of child nodes of the parent node comply with flow conservation properties if the difference between the sum of the node values of the child nodes and the node value of the parent node is less than the predetermined threshold.
This summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used in isolation to determine the scope of the claimed subject matter. The subject matter should be understood by reference to appropriate portions of the entire specification of this patent, any or all drawings, and each claim.
The foregoing, together with other features and embodiments, will become more apparent upon referring to the following specification, claims, and accompanying drawings.
In the following description, for the purposes of explanation, specific details are set forth in order to provide a thorough understanding of embodiments of the technology. However, it will be apparent that various embodiments may be practiced without these specific details. The figures and description are not intended to be restrictive.
The ensuing description provides example embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the example embodiments will provide those skilled in the art with an enabling description for implementing an example embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the technology as set forth in the appended claims.
Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed, but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
The term “machine-readable storage medium” or “computer-readable storage medium” includes, but is not limited to, portable or non-portable storage devices, optical storage devices, and various other mediums capable of storing, containing, or carrying instruction(s) and/or data. A machine-readable medium may include a non-transitory medium in which data can be stored and that does not include carrier waves and/or transitory electronic signals propagating wirelessly or over wired connections. Examples of a non-transitory medium may include, but are not limited to, a magnetic disk or tape, optical storage media such as compact disk (CD) or digital versatile disk (DVD), flash memory, memory or memory devices. A computer-program product may include code and/or machine-executable instructions that may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc.
Furthermore, embodiments may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks (e.g., a computer-program product) may be stored in a machine-readable medium. A processor(s) may perform the necessary tasks.
Systems depicted in some of the figures may be provided in various configurations. In some embodiments, the systems may be configured as a distributed system where one or more components of the system are distributed across one or more networks in a cloud computing system.
The present disclosure relates to resolving over multiple hierarchies. Specifically, various techniques and systems are provided for adjusting multiple hierarchies for consistency within levels of the hierarchies using an optimization-based approach that results in a more accurate projection across dimensions and levels in hierarchies. Hierarchies can be represented as rooted trees and include levels representing different aggregations of the time series, for example, statistical characteristics, seasonal clusters, or a business dimension for which the demand and supply planning is taking place. Each level may contain one or more nodes, each of which represents a time series. For example, one hierarchy may be focused on one or more products (e.g., as created by a product manager in a product division of the organization) and another hierarchy may be focused on distribution of the one or more products (e.g., as created by a distribution manager in a distribution division of the organization). A hierarchy may include only a few nodes, millions of nodes, or any number in between.
Furthermore, different hierarchies within an organization may represent, and be created by, different parts of the organization. In other words, different parts of an organization may map their planning needs in different hierarchies or planning structures, and therefore may use different criteria and focuses. For example, one hierarchy may be focused on one or more products (e.g. as created by a product manager in a product division of the organization) and another hierarchy may be focused on distribution of the one or more products (e.g. as created by a distribution manager in a distribution division of the organization).
Each hierarchy may include multiple aggregation levels. Each level may include a different portion of a projection segment or planning chain related to the part of the organization that the hierarchy is focused on. For example, a product dimension hierarchy may include levels focused on the brand of the product, the type of the product, the packaging of the product, the distribution of the product, and the SKU of the product, among others. Different parts or sub-parts of the organization may also focus on one or more certain levels within a planning hierarchy. Therefore, since the planning hierarchies may focus on separate, disjointed parts of the organization, an organization may be unable to view a single hierarchy or plan that focuses on all aspects of the organization.
Embodiments of the present technology include systems and methods for resolving multiple hierarchies. The hierarchies (e.g., two or more hierarchies) may be adjusted using common or overlapping levels within each of the hierarchies. For example, even though multiple hierarchies may be focused on different aspects of the time series statistical properties, or of an organization, the hierarchies may overlap or share levels. The hierarchies may converge or overlap at a common level (e.g. SKU-store level), or “leaf” level. Embodiments of the present technology may adjust the respective common levels of the multiple hierarchies to make them consistent or equal, which may affect the other levels of the hierarchies (e.g. a target level). The hierarchies may be adjusted or resolved so as to minimize the impact of the reconciliation on certain levels of the hierarchies, which may already be, for example, important, accurate, or effective. Such embodiments of the present technology are described herein with respect to
Embodiments of the present technology may be described herein with respect to demand projecting 125 and demand planning 130. However, embodiments of the present technology may also be applied to various other aspects of an organization, including other aspects represented, for example, elsewhere in planning system 100.
Each level within hierarchy 200 includes one or more nodes, i.e. time series. For example, root level 202 includes just one node, 1st level 204 includes two nodes, 2nd level 206 includes four nodes, and 3rd level 208 includes sixteen nodes. Each node within a level may represent a specific item within the category that the level represents. For example, if 1st level 204 of hierarchy 200 represents the product line of the product, each node within 1st level 204 may represent a different product line that is possible for the product in question. As used herein, the terms “parent” node and “child” node or “children” nodes are used to describe the relationship between nodes that are connected between different levels, where the “parent” node is the node that is one level higher in the hierarchy than the “child” node or “children” nodes. For example, if root level 202 represents different products within the hierarchy, and node N0 represents a dish washer product, then the nodes from 1st level 204 (i.e. the “child” or “children” nodes) that connect to node N0 (i.e. the “parent” node) may represent the different types of design that exist for the dish washer product. More specifically, nodes N1 and N2 may represent different types of dish washers, distinguished, for example, by color, functionality, or capacity. In another example, if 1st level 204 represents different types of packaging within the hierarchy, and node N0 represents a product that is packaged in a glass bottle, then the nodes from 2nd level 206 (i.e. the “child” or “children” nodes) that connect to node N0 (i.e. the “parent” node) may represent the different SKUs that may be bottled using a glass bottle.
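For illustration only, the parent/child relationship and the roll-up of node values described above can be sketched in code as follows; the node names and values here are hypothetical and are not taken from the figures:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """A node in a hierarchy holding a value for one point in time."""
    name: str
    value: float = 0.0
    children: List["Node"] = field(default_factory=list)

def aggregate(node: Node) -> float:
    """Roll child node values up to the parent (parent value = sum of children)."""
    if node.children:
        node.value = sum(aggregate(child) for child in node.children)
    return node.value

# Hypothetical example: a root node N0 with two child nodes N1 and N2.
n1 = Node("N1", value=40.0)
n2 = Node("N2", value=60.0)
n0 = Node("N0", children=[n1, n2])
aggregate(n0)  # n0.value is now 100.0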
Each node within each level may include a value that the node represents at a specific point in time. For example, if a level represents different types of packaging for a company's products, then each node within the level may include a value that is or represents the demand of the different types of packaging, at a specific point in time. More specifically, each level may include either a type of packaging (e.g. glass bottle, plastic bottle, can, etc.) or may include a numerical or other value (e.g. 1, 2, 3, etc.) that represents a type of packaging. Such values may exist within each node of each level no matter what each level represents within the hierarchy and the organization.
Since node N0 may represent one type of product within a larger hierarchy produced by, for example, a certain department of an organization, one or more other nodes on root level 202 may exist, and each may represent a different type of product. Furthermore, hierarchy 200 may be only one hierarchy of the two or more hierarchies generated by the organization or by statistical reasoning. For example, each department or combinations of departments (or portions of departments) may generate their own hierarchy (e.g. projection or forecasting hierarchy) that may look similar to hierarchy 200 or a combination of portions of hierarchies that are similar to hierarchy 200. Each of the hierarchies generated within an organization may overlap or share common levels or portions of levels, but may generally be dissimilar in other respects. Therefore, the hierarchies may be resolved or adjusted to facilitate generating a single time series at some common level that includes information from individual hierarchies. As used herein (both with respect to
As noted, the two hierarchies may represent different portions or departments of an organization, or a pure statistical hierarchy, and may reflect the future goals or focuses of their respective departments or statistical buckets. To adjust the two independent hierarchies into a single hierarchy (or to adjust both hierarchies to make them consistent), the hierarchies may include one or more levels and nodes within levels to be made consistent. In other words, the system may recognize that a certain level of each of the hierarchies represents a similar aspect or phase of the organization. Even though the hierarchies may have been generated from, for example, two different departments of the organization and may be generally focused on their respective departments, the hierarchies may overlap at a certain level. For example, if hierarchy A represents a projection for a product-focused department and hierarchy B represents a projection for a location-focused department, both hierarchies may include levels focused on, for example, the type of products being sold by the organization or the SKU-store level.
Once a common level has been identified, the nodes (or, more specifically, the node values behind the nodes) may be used in a process to adjust the two hierarchies. To resolve (or, for example, to reconcile) the two hierarchies, a set of linking constraints may be applied to the two hierarchies. A linking constraint is a rule that limits how node values within the hierarchies may be changed. As shown in
The combination of hierarchies 400 is similar to the combination of hierarchies 300 in
In causing the node values for the nodes in the common levels to be equal, the node values for the common level from one of the hierarchies (i.e. the node values for level 408A or for level 408B) may be changed to include the same values as the node values for the corresponding common level from the other hierarchy. In other words, the node values for level 408A may be changed so that they are equal to the node values for level 408B, or the node values for level 408B may be changed so that they are equal to the node values for level 408A. Alternatively, all of the node values in the common level (i.e. the node values for both level 408A and 408B) may be changed to be equal to a set of values different than the node values of level 408A or 408B.
The two hierarchies 500 include a common level of nodes, L31 through L3n. More specifically, nodes L31 through L3n include the combined level node values of levels 409 after the respective node values within the two hierarchies being resolved (i.e. reconciled) were brought into alignment with each other, as described above with respect to
Any time the node values for nodes within a level of a hierarchy are changed, the node values for nodes within other levels of the hierarchy may also be adjusted automatically. For example, if a common level of a hierarchy is a sku-store level of a retail business hierarchy, and numbers of each sku at the various stores of the retail company are adjusted, then the numbers for each of the different types of packaging that apply to the different sku's may also change. In other words, when a child node changes value, the values of one or more of the parent nodes of the child node may also change. Any or all of the levels within each of the hierarchies being resolved may be affected by the equalization of the common levels.
One or more of the levels within each of the hierarchies being made consistent and aligned may be recognized by the system as important, realistic, or accurate. For example, the node values within a certain level of a hierarchy may be identified by the system as having a high level of accuracy in projecting or forecasting future needs for the organization. In other words, while a set of lower level child nodes of a hierarchy may be relatively inaccurate due to the volatile and common nature of that level, higher level parent nodes may be more accurate. More specifically, for example, while the sku-store level of a hierarchy for a retail-based business may be relatively inaccurate due to the volatile and common nature of that level, one or more other levels within the hierarchy, such as the product type or packaging type levels, may be more accurate. Such an accurate or realistic level may be identified as a “target” level. An example set of target levels within hierarchies 400 may be target levels 404A. Since a target level may be considered to be the most accurate portion of a hierarchy, the system may choose to avoid changing the node values of the target level as much as possible. In other words, even though node values within other levels within the hierarchies being made consistent, reconciled or aligned may be changed during reconciliation and alignment, the system may choose to perform the process so as to affect the target level node values as little as possible, or to minimize the impact on the target level.
As noted, when the common levels from the two or more hierarchies are brought into alignment by relating and transforming their node values, the resulting node values of the common level may be changed to include a variety of different values. More specifically, the node values for the resulting common level may be the same as the node values for one of the two pre-aligned common levels, or the node values for the resulting level may include a set of new node values, or a combination of both. Also as noted, when the common levels of the hierarchies are aligned, other levels of the hierarchies (including a target level) may also be impacted. When a target level is identified, and the common level is adjusted for consistency, the aligned node values for the common level may be chosen so that the impact on the target level node values is minimized. In other words, the node values of the common level of hierarchies A and B may be changed to a set of values that, compared to any other candidate set of values, minimizes the amount by which the target level node values are changed.
In an embodiment, the multiple hierarchies may be adjusted or resolved for consistency by minimizing the squared distance between current target values and updated values, and minimizing the squared distance between current disaggregation factors and updated disaggregation factors, and finally adding a set of linking constraints. This can be expressed in the following optimization model:
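The optimization model itself is not reproduced in this text. A plausible reconstruction, based on the symbol definitions given below, and therefore an assumed form rather than the exact model of the disclosure, is:

\min \; \sum_{i=1}^{M} \sum_{j=1}^{M_i} w_{ij} \, (x_{ij} - \hat{X}_{ij})^2 \; + \; \sum_{l=1}^{M} \sum_{k=1}^{N} \alpha_{lk} \, (\gamma_{lk} - \hat{\gamma}_{lk})^2

subject to linking constraints of the form y_{1k} = y_{2k} = \dots = y_{Mk} for each common level node k = 1, \dots, N, where x_{ij}, \gamma_{lk}, and y_{lk} denote the updated target node values, the updated disaggregation factors, and the updated common node values, respectively.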
where M is the number of hierarchies, M_i is the number of target nodes in hierarchy i, w_ij is the weight given to target node j in hierarchy i, X̂_ij is the projected or forecasted value in target node j in hierarchy i, N is the number of common level nodes (which, for example, is the same for all hierarchies), α_lk is the weight given to common level node k in hierarchy l, ŷ_lk is the value in common node k in hierarchy l, and γ̂_lk is the disaggregation factor in common node k in hierarchy l. The first term of this equation minimizes the squared distance between the initial target level node value and its updated value, and the second term of the equation minimizes the squared distance between the initial disaggregation factor and its updated value.
For some hierarchies, it may be necessary to aggregate a set of child nodes to get to a common node level. In such a case, ŷ_lk may be replaced by a sum of its child nodes, ŷ_lk being the parent node.
For an example as shown in
subject to
As shown in
Then, the multiple hierarchies may be adjusted over the common level by minimizing the squared distance between current target values and updated values, minimizing the squared distance between current disaggregation factors and updated disaggregation factors, and finally adding a set of linking constraints. The common levels are equal, as shown in
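As a simplified, hypothetical illustration of applying a linking constraint while minimizing squared changes (it uses made-up numbers and only common-level values, not the full model above), the reconciliation can be posed as a small constrained optimization, for example with scipy:

import numpy as np
from scipy.optimize import minimize

# Hypothetical common-level values produced independently by two hierarchies.
y_a = np.array([100.0, 150.0, 250.0])   # hierarchy A
y_b = np.array([90.0, 160.0, 230.0])    # hierarchy B
w_a, w_b = 1.0, 2.0                     # weights (hierarchy B trusted more here)

def objective(z):
    """Weighted squared distance of the reconciled values from each hierarchy's originals."""
    a, b = z[:3], z[3:]
    return w_a * np.sum((a - y_a) ** 2) + w_b * np.sum((b - y_b) ** 2)

# Linking constraint: the common levels of the two hierarchies must be equal.
constraints = [{"type": "eq", "fun": lambda z: z[:3] - z[3:]}]

result = minimize(objective, x0=np.concatenate([y_a, y_b]), constraints=constraints)
print(np.round(result.x[:3], 2))   # about [93.33, 156.67, 236.67]

With equality enforced, the reconciled values reduce to the weighted average (w_a·y_a + w_b·y_b)/(w_a + w_b); in the fuller model described above, the weights on target nodes and disaggregation factors steer how the adjustment is distributed across the rest of each hierarchy.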
In another example embodiment of the present technology, consider the situation where one hierarchy generates an hourly projection and another hierarchy generates a weekly projection. In that situation, the hierarchies are different, and they are also generating projections at two different frequencies. This difference in frequency may be handled by adding a weekly common level and a weekly target level to the hourly projection hierarchy. In other words, the hourly projections are aggregated to a week level. The disaggregation factors going from week to hours are then included in the optimization model through constraints. A different combination of frequencies does not change the problem, which can be solved in the same way.
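A minimal sketch of that aggregation step, with made-up hourly values, might look like the following; the 168-hour week and the uniform values are assumptions for illustration:

# Two weeks of hypothetical hourly projections (168 hours per week).
hourly = [5.0] * (24 * 7 * 2)

# Aggregate the hourly projections to a weekly common level.
weekly = [sum(hourly[i:i + 168]) for i in range(0, len(hourly), 168)]
print(weekly)   # [840.0, 840.0]

# Disaggregation factors going from the week back down to its hours; these are the
# quantities that would enter the optimization model as constraints.
factors = [h / weekly[0] for h in hourly[:168]]
assert abs(sum(factors) - 1.0) < 1e-9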
However, other constraints may be applied in the process of resolving multiple hierarchies according to embodiments of the present technology. More specifically, constraints may be applied to the hierarchies to maintain logical consistency of the planning hierarchies at all times. Block diagram 1000 includes a block 1014, which represents one or a group of flow conservation constraints. Flow conservation constraints may enforce conservation of flow within a hierarchy. For example, flow conservation constraints may enforce the rule that the node value of a parent node should equal the sum of the values of the children nodes that flow to that parent node. Applying such a rule throughout a hierarchy may help to maintain logical consistency throughout the hierarchy.
Although flow conservation constraints may enforce the rule that the node value of a parent node should equal the sum of the values of the children nodes that flow to that parent node, the constraints may enforce this rule too strictly or literally. For example, the summing of node values may introduce round-off error, leading to a sum that is slightly different than the node value of the parent node. Such a difference or error may cause the system to assume an inconsistency or infeasibility within the hierarchy. However, such a difference may be compared to a predetermined or dynamic threshold to determine if the difference is large enough to cause an infeasibility. For example, if the error is small enough, a relieving and augmenting variable may cause the error to be ignored by the system, thus causing the potential infeasibility to be ignored.
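For illustration, a tolerance-based flow conservation check along the lines described above might look like this sketch (the threshold value is an assumption):

def flow_conservation_ok(parent_value, child_values, tol=1e-6):
    """Treat a parent as consistent with its children if the difference between the
    parent value and the sum of the child values is within a small tolerance."""
    return abs(parent_value - sum(child_values)) <= tol

# Round-off leaves a negligible gap, which should not be flagged as an infeasibility.
print(flow_conservation_ok(100.0, [33.333334, 33.333333, 33.333333]))   # True
# A genuine mismatch is still detected.
print(flow_conservation_ok(100.0, [60.0, 50.0]))                        # False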
Block diagram 1000 also includes a block 1016 that represents a disaggregation factor constraint that may be applied to the hierarchies. A disaggregation factor constraint may be applied to further maintain logical consistency within a hierarchy. A disaggregation factor may enforce a rule that a child node value may be a certain factor (i.e. proportion) of its parent node, and that node value may maintain that factor over time. More specifically, a disaggregation factor constraint represents the rule that the value of a child node equals a disaggregation factor times the value of its parent (or predecessor) node.
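As a simple illustration of disaggregation factors and the constraint they impose (the numbers are hypothetical):

def disaggregation_factors(parent_value, child_values):
    """Each child's proportion of its parent's value."""
    return [c / parent_value for c in child_values]

factors = disaggregation_factors(200.0, [100.0, 60.0, 40.0])
print(factors)   # [0.5, 0.3, 0.2]

# Disaggregation factor constraint: child value = factor * parent value, so when the
# parent is updated during reconciliation the children keep their proportions.
new_parent = 250.0
new_children = [f * new_parent for f in factors]
print(new_children)   # [125.0, 75.0, 50.0]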
Together, the flow conservation constraint or constraints and the disaggregation factor constraint or constraints may help maintain the logical consistency of one or more hierarchies that are resolved according to embodiments of the present technology. Therefore, the flow conservation and disaggregation factor constraints may be applied to one or more hierarchies before and during a reconciliation process, as described herein.
The flow conservation constraint and disaggregation factor constraint may be implemented into different parts of the method described herein, according to embodiments of the present technology. An example method according to an embodiment of the present technology may include: initializing the node values for all nodes above the operational level; loading statistical projection values into the operational (i.e. leaf) level nodes; aggregating the node values from the operational level up to the other levels in the architecture (e.g. root); computing disaggregation factors for each node (e.g. percent splits); fixing or adjusting the projection values at a desired level in each hierarchy; solving the optimization problem with a linear objective; determining and applying linking constraints to the projection values; solving the optimization problem with a quadratic objective; and analyzing artificial flows.
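For illustration only, the following sketch walks two tiny two-level hierarchies through a simplified version of these steps; the numbers are made up, and the optimization-based choice of common-level values is replaced by a plain average for brevity:

# Operational (leaf) level projections loaded into each hierarchy.
leaf_a = [100.0, 150.0, 250.0]   # hierarchy A
leaf_b = [90.0, 160.0, 230.0]    # hierarchy B

# Aggregate node values up from the operational level and compute disaggregation factors.
root_a, root_b = sum(leaf_a), sum(leaf_b)
split_a = [v / root_a for v in leaf_a]   # percent splits for hierarchy A
split_b = [v / root_b for v in leaf_b]   # percent splits for hierarchy B

# Apply the linking constraint: force the common levels to agree (a simple average here,
# standing in for the linear/quadratic optimization steps).
common = [(a + b) / 2.0 for a, b in zip(leaf_a, leaf_b)]

# Propagate the reconciled common level back up each hierarchy (flow conservation).
new_root = sum(common)
print(common, new_root)   # [95.0, 155.0, 240.0] 490.0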
Flow chart 1100 includes step 1102, which includes receiving data associated with nodes of two or more hierarchies, wherein nodes are associated with original node values. The data associated with nodes of the two or more hierarchies may be generated (or loaded from memory/storage) by the system implementing the method instead of being received from a different entity. The data may include information about nodes within the hierarchies, including information about levels within the hierarchy and node values for nodes within each level of the hierarchy.
Flow chart 1100 also includes step 1104, which includes identifying a common level node and a target level node for each of the hierarchies. As noted, a common level may be the lowest level of the hierarchy (e.g. that is associated with the lowest level actions of the organization, such as selling a product in a retail store of a retail organization). A target level may be considered to be the most accurate portion of a hierarchy, and during reconciliation the system may choose to avoid changing the node values of the target level as much as possible. In other words, even though node values within other levels within the hierarchies may be changed during reconciliation, the system may choose to perform the reconciliation so as to affect the target level node values as little as possible, or to minimize the impact on the target level during reconciliation.
Flow chart 1100 also includes step 1106, which includes identifying a linking constraint, wherein the linking constraint includes a rule to adjust a node from a planning hierarchy to make it consistent with a node from another planning hierarchy. A linking constraint, or rule, may include equalizing a common level of each of multiple hierarchies being reconciled. Step 1108 then includes applying the linking constraint to the common level node of each of the hierarchies, wherein applying the linking constraint includes generating updated common node values associated with the common level nodes, and wherein updated common node values are the same node values. When applying a linking constraint, such as one to equalize the common levels of the multiple hierarchies, the node values of the nodes within the operational levels of the hierarchies may be adjusted (either one or both of the operational levels). Such changes in the operational level node values may also affect node values for other levels within the hierarchies.
Flow chart 1100 also includes step 1110, which includes applying the updated common node values to the target level node of each of the hierarchies, wherein applying the updated common node values includes generating updated target node values. After the common node level values are brought into alignment across multiple hierarchies, and the common node values of one or more of the hierarchies are therefore changed, node values within other levels of the one or more hierarchies may also be changed. In other words, a change in one level of a hierarchy may cause changes to be brought throughout one or more other levels of the hierarchies.
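A small sketch of this propagation, together with the threshold test on the resulting target-level change described earlier in this disclosure, might look like the following (the values and threshold are hypothetical):

def target_change_exceeds_threshold(original_target, updated_children, threshold):
    """Re-aggregate reconciled common-level values under a target node and test whether
    the resulting change at the target level exceeds a predetermined threshold."""
    updated_target = sum(updated_children)
    return abs(updated_target - original_target) > threshold, updated_target

exceeded, new_value = target_change_exceeds_threshold(
    original_target=100.0, updated_children=[52.0, 62.0], threshold=10.0)
print(exceeded, new_value)   # True 114.0 -> alternative common node values may be sought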
Flow chart 1100 also includes step 1112, which includes generating a resolved planning hierarchy using the updated target node values. Once the linking constraint or constraints are applied to the multiple hierarchies, and the hierarchies have been reconciled, the system may generate a hierarchy (e.g. demand plan) that represents both hierarchies. After such a planning hierarchy is generated, it may be written or stored to memory or storage.
In some examples described herein, the systems and methods may include data transmissions conveyed via networks (e.g., local area network, wide area network, Internet, or combinations thereof, etc.), fiber optic medium, carrier waves, wireless networks, etc. for communication with one or more data processing devices. The data transmissions can carry any or all of the data disclosed herein that is provided to or from a device.
Additionally, the methods and systems described herein may be implemented on many different types of processing devices by program code comprising program instructions that are executable by the device processing subsystem. The software program instructions may include source code, object code, machine code, or any other stored data that is operable to cause a processing system to perform the methods and operations described herein. Other implementations may also be used, however, such as firmware or even appropriately designed hardware configured to carry out the methods and systems described herein.
The systems' and methods' data (e.g., associations, mappings, data input, data output, intermediate data results, final data results, etc.) may be stored and implemented in one or more different types of computer-implemented data stores, such as different types of storage devices and programming constructs (e.g., RAM, ROM, Flash memory, removable memory, flat files, temporary memory, databases, programming data structures, programming variables, IF-THEN (or similar type) statement constructs, etc.). It is noted that data structures may describe formats for use in organizing and storing data in databases, programs, memory, or other computer-readable media for use by a computer program.
A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. The processes and logic flows and figures described and shown in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output.
Generally, a computer can also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a tablet, a mobile viewing device, a mobile audio player, a Global Positioning System (GPS) receiver, to name just a few. Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
The computer components, software modules, functions, data stores and data structures described herein may be connected directly or indirectly to each other in order to allow the flow of data needed for their operations. It is also noted that a module or processor includes but is not limited to a unit of code that performs a software operation, and can be implemented for example as a subroutine unit of code, or as a software function unit of code, or as an object (as in an object-oriented paradigm), or as an applet, or in a computer script language, or as another type of computer code. The software components or functionality may be located on a single computer or distributed across multiple computers depending upon the situation at hand.
The computer may include a programmable machine that performs high-speed processing of numbers, as well as of text, graphics, symbols, and sound. The computer can process, generate, or transform data. The computer includes a central processing unit that interprets and executes instructions; input devices, such as a keyboard, keypad, or a mouse, through which data and commands enter the computer; memory that enables the computer to store programs and data; and output devices, such as printers and display screens, that show the results after the computer has processed, generated, or transformed data.
Implementations of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a graphical system, a database management system, an operating system, or a combination of one or more of them.
The methods, systems, devices, implementations, and embodiments discussed above are examples. Various configurations may omit, substitute, or add various procedures or components as appropriate. For instance, in alternative configurations, the methods may be performed in an order different from that described, or various stages may be added, omitted, or combined. Also, features described with respect to certain configurations may be combined in various other configurations. Different aspects and elements of the configurations may be combined in a similar manner. Also, technology evolves and, thus, many of the elements are examples and do not limit the scope of the disclosure or claims.
Some systems may use Hadoop®, an open-source framework for storing and analyzing big data in a distributed computing environment. Some systems may use cloud computing, which can enable ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Some grid systems may be implemented as a multi-node Hadoop® cluster, as understood by a person of skill in the art. Apache™ Hadoop® is an open-source software framework for distributed computing. Some systems may use the SAS® LASR™ Analytic Server in order to deliver statistical modeling and machine learning capabilities in a highly interactive programming environment, which may enable multiple users to concurrently manage data, transform variables, perform exploratory analysis, build and compare models and score. Some systems may use SAS In-Memory Statistics for Hadoop® to read big data once and analyze it several times by persisting it in-memory for the entire session.
Specific details are given in the description to provide a thorough understanding of examples of configurations (including implementations). However, configurations may be practiced without these specific details. For example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail in order to avoid obscuring the configurations. This description provides examples of configurations only, and does not limit the scope, applicability, or configurations of the claims. Rather, the preceding description of the configurations will provide those skilled in the art with an enabling description for implementing described techniques. Various changes may be made in the function and arrangement of elements without departing from the spirit or scope of the disclosure.
Also, configurations may be described as a process that is depicted as a flow diagram or block diagram. Although each may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process may have additional steps not included in the figure. Furthermore, examples of the methods may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the necessary tasks may be stored in a non-transitory computer-readable medium such as a storage medium. Processors may perform the described tasks.
Having described several examples of configurations, various modifications, alternative constructions, and equivalents may be used without departing from the spirit of the disclosure. For example, the above elements may be components of a larger system, wherein other rules may take precedence over or otherwise modify the application of the current disclosure. Also, a number of operations may be undertaken before, during, or after the above elements are considered. Accordingly, the above description does not bound the scope of the claims.
The use of “capable of”, “adapted to”, or “configured to” herein is meant as open and inclusive language that does not foreclose devices adapted to or configured to perform additional tasks or operations. Additionally, the use of “based on” is meant to be open and inclusive, in that a process, step, calculation, or other action “based on” one or more recited conditions or values may, in practice, be based on additional conditions or values beyond those recited. Headings, lists, and numbering included herein are for ease of explanation only and are not meant to be limiting.
It should be understood that as used in the description herein and throughout the claims that follow, the meaning of “a,” “an,” and “the” includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise. Finally, as used in the description herein and throughout the claims that follow, the meanings of “and” and “or” include both the conjunctive and disjunctive and may be used interchangeably unless the context expressly dictates otherwise; the phrase “exclusive or” may be used to indicate a situation where only the disjunctive meaning may apply.
Some systems may use cloud computing, which can enable ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Some systems may use the SAS® LASR™ Analytic Server in order to deliver statistical modeling and machine learning capabilities in a highly interactive programming environment, which may enable multiple users to concurrently manage data, transform variables, perform exploratory analysis, build and compare models and score. Some systems may use SAS In-Memory Statistics for Hadoop® to read big data once and analyze it several times by persisting it in-memory for the entire session. Some systems may be of other types, designs and configurations.
While the present subject matter has been described in detail with respect to specific embodiments thereof, it will be appreciated that those skilled in the art, upon attaining an understanding of the foregoing may readily produce alterations to, variations of, and equivalents to such embodiments. Accordingly, it should be understood that the present disclosure has been presented for purposes of example rather than limitation, and does not preclude inclusion of such modifications, variations or additions to the present subject matter as may be readily apparent to one of ordinary skill in the art.
While this disclosure may contain many specifics, these should not be construed as limitations on the scope or of what may be claimed, but rather as descriptions of features specific to particular implementations. Certain features that are described in this specification in the context of separate implementations can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems can generally be integrated together in a single software or hardware product or packaged into multiple software or hardware products.
This is a non-provisional of and claims the benefit and priority under 35 U.S.C. § 119(e) of U.S. Provisional App. No. 62/011,379. That U.S. Provisional application was filed on Jun. 12, 2014, and is incorporated by reference herein for all purposes.
20090172035 | Lessing et al. | Jul 2009 | A1 |
20090216611 | Leonard et al. | Aug 2009 | A1 |
20090319310 | Little | Dec 2009 | A1 |
20100030521 | Akhrarov et al. | Feb 2010 | A1 |
20100063974 | Papadimitriou et al. | Mar 2010 | A1 |
20100106561 | Peredriy | Apr 2010 | A1 |
20100114899 | Guha et al. | May 2010 | A1 |
20100121868 | Biannic et al. | May 2010 | A1 |
20100257133 | Crowe et al. | Oct 2010 | A1 |
20110106723 | Chipley et al. | May 2011 | A1 |
20110119374 | Ruhl et al. | May 2011 | A1 |
20110145223 | Cormode et al. | Jun 2011 | A1 |
20110208701 | Jackson et al. | Aug 2011 | A1 |
20110307503 | Dlugosch | Dec 2011 | A1 |
20120053989 | Richard | Mar 2012 | A1 |
20120123994 | Lowry et al. | May 2012 | A1 |
20120310939 | Lee et al. | Dec 2012 | A1 |
20130024167 | Blair et al. | Jan 2013 | A1 |
20130024173 | Brzezicki et al. | Jan 2013 | A1 |
20130238399 | Chipley et al. | Sep 2013 | A1 |
20130268318 | Richard | Oct 2013 | A1 |
20140019088 | Leonard et al. | Jan 2014 | A1 |
20140019448 | Leonard et al. | Jan 2014 | A1 |
20140019909 | Leonard et al. | Jan 2014 | A1 |
20140032506 | Hoey | Jan 2014 | A1 |
20140257778 | Leonard et al. | Sep 2014 | A1 |
20150052173 | Leonard et al. | Feb 2015 | A1 |
20150120269 | Dannecker | Apr 2015 | A1 |
20160005055 | Sarferaz | Jan 2016 | A1 |
20160042101 | Yoshida | Feb 2016 | A1 |
Number | Date | Country |
---|---|---|
2624171 | Aug 2013 | EP |
2002017125 | Feb 2002 | WO |
2005124718 | Dec 2005 | WO |
Entry |
---|
Notice of Allowance dated Apr. 1, 2016 for U.S. Appl. No. 14/948,970, 9 pages. |
Aiolfi, Marco et al., “Forecast Combinations,” CREATES Research Paper 2010-21, School of Economics and Management, Aarhus University, 35 pp. (May 6, 2010). |
Automatic Forecasting Systems Inc., Autobox 5.0 for Windows User's Guide, 82 pp. (1999). |
Choudhury, J. Paul et al., “Forecasting of Engineering Manpower Through Fuzzy Associative Memory Neural Network with ARIMA: A Comparative Study”, Neurocomputing, vol. 47, Iss. 1-4, pp. 241-257 (Aug. 2002). |
Costantini, Mauro et al., “Forecast Combination Based on Multiple Encompassing Tests in a Macroeconomic DSGE System,” Reihe Ökonomie / Economics Series 251, 24 pp. (May 2010). |
Data Mining Group, available at http://www.dmg.org, printed May 9, 2005, 3 pp. |
Funnel Web, Web site Analysis Report, Funnel Web Demonstration, Authenticated Users History, http://www.quest.com/funnel_web/analyzer/sample/UserHist.html (1 pg.), Mar. 2002. |
Funnel Web, Web site Analysis Report, Funnel Web Demonstration, Clients History, http://www.quest.com/funnel_web/analyzer/sample/ClientHist.html (2 pp.), Mar. 2002. |
Garavaglia, Susan et al., “A Smart Guide to Dummy Variables: Four Applications and a Macro,” accessed from: http://web.archive.org/web/20040728083413/http://www.ats.ucla.edu/stat/sas/library/nesug98/p046.pdf, (2004). |
Guerard John B. Jr., Automatic Time Series Modeling, Intervention Analysis, and Effective Forecasting. (1989) Journal of Statistical Computation and Simulation, 1563-5163, vol. 34, Issue 1, pp. 43-49. |
Guralnik, V. and Srivastava, J., Event Detection from Time Series Data (1999), Proceedings of the 5th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 33-42. |
Harrison, H.C. et al., “An Intelligent Business Forecasting System”, ACM Annual Computer Science Conference, pp. 229-236 (1993). |
Harvey, Andrew, “Forecasting with Unobserved Components Time Series Models,” Faculty of Economics, University of Cambridge, Prepared for Handbook of Economic Forecasting, pp. 1-89 (Jul. 2004). |
Jacobsen, Erik et al., “Assigning Confidence to Conditional Branch Predictions”, IEEE, Proceedings of the 29th Annual International Symposium on Microarchitecture, 12 pp. (Dec. 2-4, 1996). |
Keogh, Eamonn J. et al., “Derivative Dynamic Time Warping”, In First SIAM International Conference on Data Mining (SDM'2001), Chicago, USA, pp. 1-11 (2001). |
Kobbacy, Khairy A.H., et al., Abstract, “Towards the development of an intelligent inventory management system,” Integrated Manufacturing Systems, vol. 10, Issue 6, (1999) 11 pp. |
Kumar, Mahesh, “Combining Forecasts Using Clustering”, Rutcor Research Report 40-2005, cover page and pp. 1-16 (Dec. 2005). |
Leonard, Michael et al., “Mining Transactional and Time Series Data”, abstract and presentation, International Symposium of Forecasting, 23 pp. (2003). |
Leonard, Michael et al., “Mining Transactional and Time Series Data”, abstract, presentation and paper, SUGI, 142 pp. (Apr. 10-13, 2005). |
Leonard, Michael, “Large-Scale Automatic Forecasting Using Inputs and Calendar Events”, abstract and presentation, International Symposium on Forecasting Conference, 56 pp. (Jul. 4-7, 2004). |
Leonard, Michael, “Large-Scale Automatic Forecasting Using Inputs and Calendar Events”, White Paper, pp. 1-27 (2005). |
Leonard, Michael, “Large-Scale Automatic Forecasting: Millions of Forecasts”, abstract and presentation, International Symposium of Forecasting, 156 pp. (2002). |
Leonard, Michael, “Predictive Modeling Markup Language for Time Series Models”, abstract and presentation, International Symposium on Forecasting Conference, 35 pp. (Jul. 4-7, 2004). |
Leonard, Michael, “Promotional Analysis and Forecasting for Demand Planning: A Practical Time Series Approach”, with exhibits 1 and 2, SAS Institute Inc., Cary, North Carolina, 50 pp. (2000). |
Lu, Sheng et al., “A New Algorithm for Linear and Nonlinear ARMA Model Parameter Estimation Using Affine Geometry”, IEEE Transactions on Biomedical Engineering, vol. 48, No. 10, pp. 1116-1124 (Oct. 2001). |
Malhotra, Manoj K. et al., “Decision making using multiple models”, European Journal of Operational Research, 114, pp. 1-14 (1999). |
McQuarrie, Allan D.R. et al., “Regression and Time Series Model Selection”, World Scientific Publishing Co. Pte. Ltd., 40 pp. (1998). |
Oates, Tim et al., “Clustering Time Series with Hidden Markov Models and Dynamic Time Warping”, Computer Science Department, LGRC University of Massachusetts, in Proceedings of the IJCAI-99, 5 pp. (1999). |
Park, Kwan Hee, Abstract “Development and evaluation of a prototype expert system for forecasting models”, Mississippi State University, 1990, 1 pg. |
Product Brochure, Forecast PRO, 2000, 12 pp. |
Quest Software, “Funnel Web Analyzer: Analyzing the Way Visitors Interact with Your Web Site”, http://www.quest.com/funnel_web/analyzer (2 pp.), Mar. 2002. |
Safavi, Alex “Choosing the right forecasting software and system.” The Journal of Business Forecasting Methods & Systems 19.3 (2000): 6-10. ABI/INFORM Global, ProQuest. |
SAS Institute Inc., SAS/ETS User's Guide, Version 8, Cary NC; SAS Institute Inc., (1999) 1543 pages. |
Seasonal Dummy Variables, Mar. 2004, http://shazam.econ.ubc.ca/intro/dumseas.htm, Accessed from: http://web.archive.org/web/20040321055948/http://shazam.econ.ubc.ca/intro/dumseas.htm. |
Simoncelli, Eero, “Least Squares Optimization,” Center for Neural Science, and Courant Institute of Mathematical Sciences, pp. 1-8 (Mar. 9, 2005). |
Tashman, Leonard J. et al., Abstract “Automatic Forecasting Software: A Survey and Evaluation”, International Journal of Forecasting, vol. 7, Issue 2, Aug. 1991, 1 pg. |
Using Predictor Variables, (1999) SAS OnlineDoc: Version 8, pp. 1325-1349, Accessed from: http://www.okstate.edu/sas/v8/saspdf/ets/chap27.pdf. |
van Wijk, Jarke J. et al., “Cluster and Calendar based Visualization of Time Series Data”, IEEE Symposium on Information Visualization (INFOVIS '99), San Francisco, pp. 1-6 (Oct. 25-26, 1999). |
Vanderplaats, Garret N., “Numerical Optimization Techniques for Engineering Design”, Vanderplaats Research & Development (publisher), Third Edition, 18 pp. (1999). |
Wang, Liang et al., “An Expert System for Forecasting Model Selection”, IEEE, pp. 704-709 (1992). |
Atuk, Oguz et al., “Seasonal Adjustment in Economic Time Series,” Statistics Department, Discussion Paper No. 2002/1, Central Bank of the Republic of Turkey, Central Bank Review, 15 pp. (2002). |
Babu, G., “Clustering in non-stationary environments using a clan-based evolutionary approach,” Biological Cybernetics, Sep. 7, 1995, Springer Berlin / Heidelberg, pp. 367-374, vol. 73, Issue: 4. |
Bruno, Giancarlo et al., “The Choice of Time Intervals in Seasonal Adjustment: A Heuristic Approach,” Institute for Studies and Economic Analysis, Rome Italy, 14 pp. (2004). |
Bruno, Giancarlo et al., “The Choice of Time Intervals in Seasonal Adjustment: Characterization and Tools,” Institute for Studies and Economic Analysis, Rome, Italy, 21 pp. (Jul. 2001). |
Bradley, D.C. et al., “Quantitation of measurement error with Optimal Segments: basis for adaptive time course smoothing,” Am J Physiol Endocrinol Metab Jun. 1, 1993 264:(6) E902-E911. |
Huang, N. E. et al., “Applications of Hilbert-Huang transform to non-stationary financial time series analysis.” Appl. Stochastic Models Bus. Ind., 19: 245-268 (2003). |
IBM, “IBM Systems, IBM PowerExecutive Installation and User's Guide,” Version 2.10, 62 pp. (Aug. 2007). |
Kalpakis, K. et al., “Distance measures for effective clustering of ARIMA time-series,” Data Mining, 2001. ICDM 2001, Proceedings IEEE International Conference on, vol., No., pp. 273-280, 2001. |
Keogh, E. et al., “An online algorithm for segmenting time series,” Data Mining, 2001, ICDM 2001, Proceedings IEEE International Conference on, vol., No., pp. 289-296, 2001. |
Keogh, Eamonn et al., “Segmenting Time Series: A Survey and Novel Approach,” Department of Information and Computer Science, University of California, Irvine, California 92697, 15 pp. (2004). |
Palpanas, T. et al., “Online amnesic approximation of streaming time series,” Data Engineering, 2004. Proceedings. 20th International Conference on, vol., No., pp. 339-349, Mar. 30-Apr. 2, 2004. |
Wang Xiao-Ye; Wang Zheng-Ou; “A structure-adaptive piece-wise linear segments representation for time series,” Information Reuse and Integration, 2004. IRI 2004. Proceedings of the 2004 IEEE International Conference on, vol., No., pp. 433-437, Nov. 8-10, 2004. |
Yu, Lean et al., “Time Series Forecasting with Multiple Candidate Models: Selecting or Combining?” Journal of System Science and Complexity, vol. 18, No. 1, pp. 1-18 (Jan. 2005). |
Wang, Ming-Yeu et al., “Combined forecast process: Combining scenario analysis with the technological substitution model,” Technological Forecasting and Social Change, vol. 74, pp. 357-378 (2007). |
Green, Kesten C. et al., “Structured analogies for forecasting” International Journal of Forecasting, vol. 23, pp. 365-376 (2007). |
Agarwal, Deepak et al., “Efficient and Effective Explanation of Change in Hierarchical Summaries”, The 13th International Conference on Knowledge Discovery and Data Mining 2007, Aug. 12-15, 2007 (10 pages). |
Hyndman, Rob J. et al., “Optimal combination forecasts for hierarchical time series”, Monash University, Department of Econometrics and Business Statistics, http://www.buseco.monash.edu.au/depts/ebs/pubs/wpapers/ (2007) 23 pages. |
SAS Institute Inc., SAS/QC 9.1: User's Guide. Cary, NC: SAS Publications, (2004). |
SAS Institute Inc., SAS/QC 13.2 User's Guide. Cary, NC: SAS Publications, (2014). |
Wheeler, Donald J., and David S. Chambers. Understanding Statistical Process Control. 2nd ed. Knoxville, Tenn.: SPC Press, 1992. |
Wheeler, Donald J. Advanced Topics in Statistical Process Control. Knoxville, Tenn.: SPC Press, 1995. |
Montgomery, Douglas C. Introduction to Statistical Quality Control. 6th ed. Hoboken, N.J.: Wiley, 2009. |
Cecil Bozarth, Ph.D., Measuring Forecast Accuracy: Approaches to Forecasting: A Tutorial, Published Jan. 25, 2011, 3 pages. |
Davis Aquilano Chase, Fundamentals of Operations Management, Chapter 9 Forecasting, The McGraw-Hill Companies, Inc. 2003, 42 pages. |
DataNet Quality Systems, What are the Western Electric Rules, http://www.winspc.com/14-datanet-quality-systems/support-a-resources/179-what-are-the-western-electric-rules, (available online Apr. 14, 2014). |
First Action Interview Pilot Program Pre-Interview Communication dated Jun. 12, 2015 for U.S. Appl. No. 14/668,854, 6 pages. |
Notice of Allowance dated Oct. 5, 2015 for U.S. Appl. No. 14/668,854; 10 pages. |
Number | Date | Country |
---|---|---|
20160171089 A1 | Jun 2016 | US
Number | Date | Country |
---|---|---|
62011379 | Jun 2014 | US