This application contains subject matter that may be considered related to subject matter disclosed in U.S. patent application Ser. No. 11/510,527 (entitled “Computer-implemented systems and methods for reducing cost flow models” and filed on Aug. 25, 2006) and in U.S. patent application Ser. No. 11/370,371 (entitled “Systems and methods for costing reciprocal relationships” and filed on Mar. 8, 2006), the entire disclosures of which (including any and all figures) are incorporated herein by reference.
This document relates generally to computer-implemented cost analysis and more particularly to computer-implemented cost analysis that uses cost flow models.
A cost flow model, such as an activity-based cost and management (ABC/M) model, is a multi-dimensional directed graph. It depicts how money flows in an enterprise. The nodes in the graph represent the resource, activity, or cost object accounts. The edges in the graph have a percentage on them, which defines how much money flows from a source account to a destination account.
For example, in a company, money may flow through many paths, and the linkage between origin and destination can therefore become murky. Activity-based costing and management (ABC/M) systems show the flow, and can compute multi-stage partial contributions from any resource to any cost object. Graphs modeling such systems can easily have hundreds of thousands of accounts and millions of edges. Existing ABC/M systems are based on path enumeration algorithms; as the number of paths grows, however, the feasibility of “walking all paths” is significantly reduced. Following all paths is also problematic when there are cycles within the flow (reciprocal allocation models).
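As a minimal illustration of this representation (a sketch with hypothetical account names, not taken from any particular model), such a graph can be encoded as a set of weighted edges, where each weight is the fraction of the source account's money assigned to the destination account:

```python
# Sketch: a cost flow graph as a dictionary of edges. Each value is the
# fraction of the source account's money that flows to the destination.
# Account names here are hypothetical.
edges = {
    ("Rent", "Assembly"): 0.6,
    ("Rent", "Packaging"): 0.4,
    ("Assembly", "WidgetA"): 1.0,
    ("Packaging", "WidgetA"): 0.5,
    ("Packaging", "WidgetB"): 0.5,
}

def outflow_fraction(node):
    """Total fraction of a node's money assigned to other accounts."""
    return sum(f for (src, _), f in edges.items() if src == node)

# A fully assigned account distributes 100% of its money.
assert abs(outflow_fraction("Rent") - 1.0) < 1e-12
```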
In accordance with the teachings provided herein, systems and methods for operation upon data processing devices are provided for analyzing costs associated with a cost flow model having components of relationships and entities. As an illustration, a system and method can be configured to receive data associated with the cost flow model that identifies the costs associated with the relationships among the entities. One or more matrices are created that are representative of the costs and the entity relationships. One or more sparse matrix operations are performed upon the created matrices in order to determine cost contribution amounts from one entity to another. The determined cost contribution amounts for each of the entities are then provided, such as to a user or to an external system.
As another illustration, a system and method can be configured based on solving a sparse system of linear equations that calculates activity-based cost flow in real time, as compared to the hours or weeks needed by current state-of-the-art solutions. In this illustration, the system and method is dependent neither on the number of paths in a model nor on the presence of reciprocal accounts (cycles). In this example, the system and method depends only on the number of accounts (nodes) and edges. In addition, the system and method does not require reading and writing a significant amount of information to external storage, such as a hard drive.
The users 32 can interact with the cost flow analysis system 34 in a number of ways, such as over one or more networks 36. A server 38 accessible through the network(s) 36 can host the cost flow analysis system 34. It should be understood that the cost flow analysis system 34 could also be provided on a stand-alone computer for access by a user.
The cost flow analysis system 34 can be an integrated web-based reporting and analysis tool that provides users flexibility and functionality for performing cost flow determinations and analysis. One or more data stores 40 can store the data to be analyzed by the system 34 as well as any intermediate or final data generated by the system 34. For example, data store(s) 40 can store the data representation of cost flow graph(s) 42 (e.g., the data associated with the cost flow model that identifies the costs associated with the relationships among the entities as well as one or more matrices that are representative of the costs and the entity relationships). Examples of data store(s) 40 may include relational database management systems (RDBMS), a multi-dimensional database (MDDB), such as an Online Analytical Processing (OLAP) database, etc.
The entity data can represent any number of entities, including, but not limited to, cost pools such as activity-based cost pools, process-based cost pools, and other logical groupings of money. The cost data can represent the money allocated (or to be allocated) to a particular cost pool. It should be understood that the term cost is used broadly to cover a range of possible uses of money and/or other property. For example, cost data can in some instances refer to budgeting, where an actual expense does not yet exist. The relationship data includes information regarding which relationship(s) a particular entity has with other entities. Each relationship has a percentage that indicates the amount of cost that flows from a particular entity to one or more other entities.
When restating an ABC/M graph as a matrix, the assignment from node a to node b can be restated as b=a, which can then be solved as a system of linear equations. In this particular case, because of the nature of cost flow, partitioning and solving the matrix lend themselves well to parallelization. An ABC/M graph can assume different forms. For example,
With respect to
x1=100.00
x2=200.00
Node x1 contributes 100% of its money to x3; node x2 contributes 100% to x4. The corresponding equations are as follows:
x3=x1
x4=x2
Node x3 contributes 50% of its money to x5, and the remaining 50% to x6; node x6 also receives 100% of x4's money:
x5=0.5x3
x6=x4+0.5x3
We can rewrite the above equations as shown at 200 in
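Because the example graph is acyclic, the six-node system above is effectively lower triangular and can be solved with a single forward pass. A minimal sketch in plain Python, with the node order serving as a topological order:

```python
# coeffs[node] maps each contributing source node to its edge fraction,
# mirroring the equations x3 = x1, x4 = x2, x5 = 0.5*x3, x6 = x4 + 0.5*x3.
coeffs = {
    "x3": {"x1": 1.0},
    "x4": {"x2": 1.0},
    "x5": {"x3": 0.5},
    "x6": {"x4": 1.0, "x3": 0.5},
}
b = {"x1": 100.0, "x2": 200.0}  # initial money on the source nodes

x = {}
for node in ["x1", "x2", "x3", "x4", "x5", "x6"]:  # topological order
    x[node] = b.get(node, 0.0) + sum(
        f * x[src] for src, f in coeffs.get(node, {}).items())

# x["x5"] is 50.0 and x["x6"] is 250.0
```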
It should be understood that, similar to the other processing flows described herein, the steps and the order of the steps in the flowchart described in
where P is a permutation matrix, A is the original ABC/M matrix, P^T (the transpose of P) is the inverse permutation, L is a lower triangular matrix, and M is a square matrix. Notice that L corresponds to the head acyclic portion of the graph.
After the reordering has been performed, virtual nodes are constructed if desired for solving a contribution problem. The virtual nodes are used for determining contribution amounts from one or more subsets of nodes to other subsets of nodes. In other words, given a prescribed flow on an arbitrary non-empty subset of nodes, the virtual nodes are used in determining how much of that money will contribute to (end up in) another arbitrary set of target nodes. The construction of virtual nodes results in a new system of linear equations 412 that represents the node aggregation performed at step 410. Processing continues on
With reference to
It is noted that if a situation does not require a contribution system to be constructed (e.g., through step 504) and only the individual flow amounts on each node are needed, then the above-described process is modified in order to solve for x to obtain the individual flow amounts on each node:
With reference back to the contribution situation, an example of contribution system processing is as follows. Suppose we need to compute the contribution from an arbitrary subset of nodes (e.g., nodes R1 and R2 that are shown at 650 on
The first step in the contribution algorithm is to zero the right-hand sides, so that the flow on every node automatically becomes zero, and to introduce virtual nodes. This is shown as virtual nodes V1 and V2 at 700 on
Therefore, we can “fix” and effectively eliminate the R1 and R2 nodes from the system of linear equations (by transferring their contribution to the right-hand-side vector). However, by zeroing out the right-hand sides we eliminate all “in-flow” into R1 and R2, thus making the network temporarily infeasible. By introducing virtual nodes we not only restore feasibility (the virtual nodes compensate for the zero in-flow) but also ensure the correct amount of money on R1 and R2. Because there are no other contributors in the network, we obtain the desired contribution by solving the corresponding system of linear equations.
More specifically, suppose we are interested in solving the contribution problem from an arbitrary non-empty set of from-nodes F:
F={F1, . . . , Fp}
to an arbitrary set of to-nodes T={T1, . . . , Tq} (notice that F and T are disjoint: F∩T=Ø). For each node j from F we “fix” the resultant flow at x*j (where x* is the solution to the original system of linear equations Ax*=b) and introduce virtual nodes as new unknowns:
Virtual nodes' solution values are effectively ignored, since their purpose is to maintain the correct in-flow for the from-nodes. With such an approach, all matrix transformations can be done in-place, without matrix reallocation.
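Using the six-node example above (x1 through x6), the contribution mechanism can be illustrated by pinning the flow on the from-nodes at their solved values, which is the role played by the virtual nodes, while zeroing every right-hand side. This is a minimal sketch; the node sets here are chosen for illustration:

```python
coeffs = {
    "x3": {"x1": 1.0},
    "x4": {"x2": 1.0},
    "x5": {"x3": 0.5},
    "x6": {"x4": 1.0, "x3": 0.5},
}
order = ["x1", "x2", "x3", "x4", "x5", "x6"]  # topological order

def solve(b, fixed=None):
    """Forward pass; `fixed` pins the flow on from-nodes (virtual-node role)."""
    fixed = fixed or {}
    x = {}
    for node in order:
        if node in fixed:
            x[node] = fixed[node]
        else:
            x[node] = b.get(node, 0.0) + sum(
                f * x[src] for src, f in coeffs.get(node, {}).items())
    return x

full = solve({"x1": 100.0, "x2": 200.0})        # original solution x*
contrib = solve({}, fixed={"x3": full["x3"]})   # contribution from F = {x3}
# contrib["x6"] is 50.0: half of x3's 100.0 ends up in x6
```

Because every other right-hand side is zero, any money that appears on a to-node in `contrib` can only have originated from the pinned from-nodes, which is exactly the contribution quantity described above.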
An example of the processing performed in step 510 of
Mx = b, M ∈ R^(n×n), x, b ∈ R^n
We say matrix M is L-dominant (or U-dominant) if most of its nonzeros are located in the lower (or upper) triangular portion of the matrix. Furthermore, we require the corresponding lower (or upper) triangular part to be nonsingular:
Parallel algorithms for ABC/M matrices can be used, where typically 70% or more of the nonzero coefficients are located in the lower triangular factor.
Let us represent M as the sum of its lower triangular part L and strictly upper triangular part U_S:
M ≡ L + U_S
To rephrase, M is called L-dominant if nonz(L) > nonz(U_S) and det(L) ≠ 0. The U-dominant case is similar: M ≡ U + L_S, nonz(U) > nonz(L_S), and det(U) ≠ 0.
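A minimal sketch of this classification for a sparse matrix given as a dictionary of nonzero entries (0-based indices; for a triangular factor, nonsingularity reduces to every diagonal entry being nonzero):

```python
def is_l_dominant(entries, n):
    """entries: {(row, col): value} for an n-by-n sparse matrix M = L + U_S."""
    lower = sum(1 for (r, c) in entries if r >= c)  # nonz(L), diagonal included
    strict_upper = len(entries) - lower             # nonz(U_S)
    # A triangular matrix is nonsingular iff its diagonal has no zeros.
    diag_ok = all(entries.get((i, i), 0.0) != 0.0 for i in range(n))
    return lower > strict_upper and diag_ok

M = {(0, 0): 1.0, (1, 0): -0.5, (1, 1): 1.0,
     (2, 1): 0.2, (2, 2): 1.0, (0, 2): 0.3}
assert is_l_dominant(M, 3)  # 5 of 6 nonzeros lie in L; diagonal is nonzero
```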
Since ABC/M matrices (linear systems) are L-dominant, an L^(−1) preconditioner (a single forward substitution) is applied in an iterative Krylov algorithm:
Let us partition L into four blocks as follows:
where A and C are lower triangular sub-matrices. Notice that A and C are both nonsingular, since M is assumed to be L-dominant. To proceed further we need to establish an inverse triangular decomposition for L^(−1), which will play a fundamental role in constructing parallel preconditioners for ABC/M systems of linear equations:
The inverse of a lower triangular matrix L can be represented as follows:
Both forward substitution (with either dense or sparse right-hand sides) and the inverse triangular decomposition require the same number of floating-point operations to solve Lx = b.
Indeed, a forward substitution x = L^(−1)b that takes into account the sparsity of the solution vector x requires:
multiplications and additions, where supp(x) = {i: x_i ≠ 0} denotes the support of x (the index set of nonzero vector coefficients) and l_j is the j-th column of matrix L. (If L and x are dense, hence nonz(l_j) = n − j + 1, the above expression results in the well-known n².) The inverse triangular decomposition comprises two forward substitutions and one matrix-vector multiplication:
where x = (x_A, x_C)^T is partitioned in correspondence with the partitioning of L.
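The decomposition can be sketched in plain Python with dense nested lists for clarity: solve A x_A = b_A, form the product B x_A (the part that parallelizes trivially), then solve C x_C = b_C − B x_A. This is a minimal sketch, not an optimized sparse implementation:

```python
def forward_sub(T, b):
    """Solve T y = b for a nonsingular lower triangular matrix T."""
    y = []
    for i in range(len(b)):
        y.append((b[i] - sum(T[i][j] * y[j] for j in range(i))) / T[i][i])
    return y

def block_solve(A, B, C, b):
    """Solve L x = b with L = [[A, 0], [B, C]] via the block decomposition."""
    k = len(A)
    xA = forward_sub(A, b[:k])                 # x_A = A^(-1) b_A
    BxA = [sum(B[i][j] * xA[j] for j in range(k)) for i in range(len(C))]
    xC = forward_sub(C, [b[k + i] - BxA[i] for i in range(len(C))])
    return xA + xC

A = [[2.0, 0.0], [1.0, 3.0]]
B = [[1.0, 2.0], [0.0, 4.0]]
C = [[1.0, 0.0], [2.0, 5.0]]
L = [[2, 0, 0, 0], [1, 3, 0, 0], [1, 2, 1, 0], [0, 4, 2, 5]]
b = [4.0, 7.0, 10.0, 20.0]
# The block solve matches plain forward substitution on the full L.
assert all(abs(u - v) < 1e-12
           for u, v in zip(block_solve(A, B, C, b), forward_sub(L, b)))
```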
A similar result can also be established for a U-dominant matrix.
By virtue of the above, we observe that the bulk of the sequential floating-point operations in either forward or backward substitution can be at least partially reduced to a matrix-vector product, which is trivially parallel.
Applying the same principle recursively, we can further subdivide the A and C matrices and thus increase the level of parallelism of the L^(−1) preconditioner. Another attractive aspect of this approach lies in the fact that the submatrices are kept “in place”: we do not need to explicitly extract or duplicate floating-point coefficients from the original matrix M.
The processing then considers how we shall partition the original matrix M. Let i denote a partition column, the first column of matrix C:
A ∈ R^((i−1)×(i−1)), C ∈ R^((n−i+1)×(n−i+1)), B ∈ R^((n−i+1)×(i−1))
A partition column is selected so as to maximize the number of nonzero coefficients in B. Indeed, by maximizing nonz(B) we transfer the bulk of the sequential forward substitution operations into a perfectly scalable/parallel matrix-vector multiplication. Let nonz(l^k) denote the number of nonzero coefficients in row k of L, and let nonz(l_k) denote the number of nonzero coefficients in column k of L. Hence our partitioning problem can be restated as follows:
The above maximization can be reduced to a parallel enumeration of two integer n-component arrays, a very fast operation assuming compressed-column and compressed-row sparsity data structures are available (which is the case with ABC/M linear systems).
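A sketch of that enumeration (0-based indices here): nonz(B) for a candidate partition column i can be updated incrementally from two count arrays, one holding the nonzeros strictly below the diagonal per column and one holding the nonzeros strictly left of the diagonal per row:

```python
def best_partition(entries, n):
    """Return (i, nonz(B)) maximizing nonzeros in the off-diagonal block B.

    entries: iterable of (row, col) nonzero positions of an n-by-n matrix.
    """
    below_col = [0] * n  # nonzeros strictly below the diagonal, per column
    left_row = [0] * n   # nonzeros strictly left of the diagonal, per row
    for r, c in entries:
        if r > c:
            below_col[c] += 1
            left_row[r] += 1
    best_i, best_nnz, f = 1, 0, 0
    for i in range(1, n):
        # Moving the split from column i-1 to i adds column i-1's entries
        # below the diagonal and drops row i-1's entries left of it.
        f += below_col[i - 1] - left_row[i - 1]
        if f > best_nnz:
            best_i, best_nnz = i, f
    return best_i, best_nnz

nz = [(0, 0), (1, 0), (1, 1), (2, 0), (2, 1), (2, 2), (3, 1), (3, 2), (3, 3)]
# splitting before column 2 puts three nonzeros into B
assert best_partition(nz, 4) == (2, 3)
```

The two count arrays are exactly the per-row and per-column tallies available from compressed-row and compressed-column storage, so the whole scan is a single linear pass over two integer arrays.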
Examples of optimal 2- and 4-way partitions are respectively shown at 550 and 600 on
While examples have been used to disclose the invention, including the best mode, and also to enable any person skilled in the art to make and use the invention, the patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. Accordingly, the examples disclosed herein are to be considered non-limiting. As an illustration, the systems and methods may be implemented on various types of computer architectures, such as for example on a single general-purpose computer or workstation (as shown at 750 on
As another example of the wide scope of the systems and methods disclosed herein, a cost flow analysis system can be used with many different types of graphs. As an illustration, the entities of a graph can include resources, activities and cost objects (e.g., cost pools such as organizational cost pools, activity-based cost pools, process-based cost pools, other logical groupings of money, and combinations thereof).
The nodes of the graph can represent accounts associated with the resources, activities, or cost objects. In such a graph, an edge of the graph is associated with a percentage, which defines how much money flows from a source account to a destination account. The cost flow model depicts how money flows in the enterprise, starting from the resources to the activities, and finally, to the cost objects. The cost objects can represent products or services provided by the enterprise.
Such a graph can be relatively complex, as it may include over 100,000 accounts and over 1,000,000 edges. This can arise when modeling the cost flow among service department accounts in one or more large companies. Examples of service departments include a human resources department, an information technology department, a maintenance department, and an administrative department. In such a situation, a cost flow analysis system determines the allocation of costs for the entities in the cost flow model, thereby allowing a user to establish a cost associated with operating each of the entities in the cost flow model. The allocation of costs may include budgeting, allocating expenses, allocating revenues, allocating profits, assigning capital, and combinations thereof.
It is further noted that the systems and methods may include data signals conveyed via networks (e.g., local area network, wide area network, internet, combinations thereof, etc.), fiber optic medium, carrier waves, wireless networks, etc. for communication with one or more data processing devices. The data signals can carry any or all of the data disclosed herein that is provided to or from a device.
Additionally, the methods and systems described herein may be implemented on many different types of processing devices by program code comprising program instructions that are executable by the device processing subsystem. The software program instructions may include source code, object code, machine code, or any other stored data that is operable to cause a processing system to perform the methods and operations described herein. Other implementations may also be used, however, such as firmware or even appropriately designed hardware configured to carry out the methods and systems described herein.
The systems' and methods' data (e.g., associations, mappings, etc.) may be stored and implemented in one or more different types of computer-implemented ways, such as different types of storage devices and programming constructs (e.g., data stores, RAM, ROM, Flash memory, flat files, databases, programming data structures, programming variables, IF-THEN (or similar type) statement constructs, etc.). It is noted that data structures describe formats for use in organizing and storing data in databases, programs, memory, or other computer-readable media for use by a computer program.
The systems and methods may be provided on many different types of computer-readable media including computer storage mechanisms (e.g., CD-ROM, diskette, RAM, flash memory, computer's hard drive, etc.) that contain instructions (e.g., software) for use in execution by a processor to perform the methods' operations and implement the systems described herein.
The computer components, software modules, functions, data stores and data structures described herein may be connected directly or indirectly to each other in order to allow the flow of data needed for their operations. It is also noted that a module or processor includes but is not limited to a unit of code that performs a software operation, and can be implemented for example as a subroutine unit of code, or as a software function unit of code, or as an object (as in an object-oriented paradigm), or as an applet, or in a computer script language, or as another type of computer code. The software components and/or functionality may be located on a single computer or distributed across multiple computers depending upon the situation at hand.
It should be understood that as used in the description herein and throughout the claims that follow, the meaning of “a,” “an,” and “the” includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise. Finally, as used in the description herein and throughout the claims that follow, the meanings of “and” and “or” include both the conjunctive and disjunctive and may be used interchangeably unless the context expressly dictates otherwise; the phrase “exclusive or” may be used to indicate situation where only the disjunctive meaning may apply.
Number | Date | Country
---|---|---
20090018880 A1 | Jan 2009 | US