A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
One or more implementations relate generally to dividing a spiff budget to increase labor force productivity.
The subject matter discussed in the background section should not be assumed to be prior art merely as a result of its mention in the background section. Similarly, a problem mentioned in the background section or associated with the subject matter of the background section should not be assumed to have been previously recognized in the prior art. The subject matter in the background section merely represents different approaches, which in and of themselves may also be inventions.
A traditional method used by enterprises to increase labor force productivity is to additionally incorporate variable compensation into a labor force's guaranteed compensation plans. The variable compensation is customarily contingent on an individual contributor's performance in a directly proportional manner, i.e., higher performance yields higher variable compensation. Assuming at least that the individual contributor desires higher compensation, the individual contributor's knowledge of his or her variable compensation plan incentivizes the individual contributor to increase his or her performance yielding output and/or results that benefit the enterprise.
However, the enterprise has limited resources with which to incentivize its labor force. Accordingly, it is desirable to provide techniques that yield better allocations of limited resources to incentivize a labor force to increase productivity.
In accordance with embodiments, there are provided systems, methods, and computer program products for dividing one or more spiff budgets to increase productivity from a labor force. A system analyzes inputs and creates performance models based on the analysis. Based on the created performance models, the system then generates a budget allocation model that proposes at least one division of a budget so as to project increased productivity from a labor force.
Inputs can include data corresponding to entities within an enterprise's labor force, for example, salespeople. The data for the salespeople can include historical data (e.g., each salesperson's past performance associated with previously-assigned sales targets and sales bonuses or spiffs, etc.) or current data (e.g., each salesperson's industry, job title, skill set, customer satisfaction score, availability, etc.). Inputs can also include data corresponding to externalities, for example, customers. The data for the customers can include historical data (e.g., each customer's past purchase frequency and purchase amount of services or products, etc.) or current data (e.g., each customer's industry, customer satisfaction score, location, etc.). Inputs can also include pre-analyzed data such as previously-generated descriptive or inferential models that correlate inputs with productivity.
Performance models project anticipated labor results of entities based on the analysis. A performance model can be a linear or nonlinear model, but in any case, correlates at least one input with at least the anticipated productivity of an entity.
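As a minimal illustration of the linear case described above, anticipated productivity can be computed as a weighted combination of inputs. The feature names, weights, and intercept below are hypothetical stand-ins, not values from this disclosure.

```python
# Minimal sketch of a linear performance model: anticipated productivity
# as a weighted sum of inputs. Feature names and weights are hypothetical.

def performance_model(inputs, weights, intercept=0.0):
    """inputs/weights: dicts keyed by feature name.
    Returns the anticipated productivity of one entity."""
    return intercept + sum(weights[k] * inputs[k] for k in weights)

# Example: past sales and a skill score as the correlated inputs.
anticipated = performance_model(
    {"past_sales": 100.0, "skill_score": 0.8},
    {"past_sales": 1.2, "skill_score": 50.0},
)
```

A nonlinear model would replace the weighted sum with any function of the inputs; the essential property is only that at least one input is correlated with anticipated productivity.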
Budget allocation models propose a division of the budget between the labor force entities based on the performance models. The budget allocation models build on the performance models to attempt to propose at least one division that increases or optimizes overall productivity from the enterprise's labor force.
Any of the following described embodiments may be used alone or together with one another in any combination. The one or more implementations encompassed within this specification may also include embodiments that are only partially mentioned or alluded to or are not mentioned or alluded to at all in this brief summary or in the abstract. Although various embodiments may have been motivated by various deficiencies with the prior art, which may be discussed or alluded to in one or more places in the specification, the embodiments do not necessarily address any of these deficiencies. In other words, different embodiments may address different deficiencies that may be discussed in the specification. Some embodiments may only partially address some deficiencies or just one deficiency that may be discussed in the specification, and some embodiments may not address any of these deficiencies.
In the following drawings like reference numbers may be used to refer to like elements. Although the following figures depict various examples, the one or more implementations are not limited to the examples depicted in the figures.
Systems, methods, and computer program products are provided for dividing one or more budgets to increase productivity from a labor force. Although embodiments are described with reference to a particular problem of dividing a spiff budget between salespeople/salespersons within the labor force to incentivize increased sales, a skilled artisan will recognize that embodiments have wider applicability than the particular problem. Further, the term salesperson and related singular nouns can be used throughout this disclosure to encompass one or more groups of salespeople.
One of many use cases for the described embodiments is a sales manager tasked with incentivizing a sales team on a particular sales campaign. However, the sales manager has limited resources with which to compensate the sales team; in particular, the sales manager has a limited budget. Based on the limited budget, the sales manager wishes to allocate the budget as spiffs so as to maximize the sum total output of the sales team.
Environment 110 is an environment in which an on-demand database service exists. User system 112 may be any machine or system that is used by a user to access a database system 116. For example, any of user systems 112 can be a handheld computing device, a mobile phone, a laptop computer, a work station, and/or a network of such computing devices. As illustrated in
An on-demand database service, such as system 116, is a database system that is made available to outside users, who do not necessarily need to be concerned with building and/or maintaining the database system. Instead, the database system may be available for their use when the users need the database system, i.e., on the demand of the users. Some on-demand database services may store information from one or more tenants into tables of a common database image to form a multi-tenant database system (MTS). A database image may include one or more database objects. A relational database management system (RDBMS) or the equivalent may execute storage and retrieval of information against the database object(s). Application platform 118 may be a framework that allows the applications of system 116 to run, such as the hardware and/or software, e.g., the operating system. In some implementations, application platform 118 enables creation, managing and executing one or more applications developed by the provider of the on-demand database service, users accessing the on-demand database service via user systems 112, or third party application developers accessing the on-demand database service via user systems 112.
The users of user systems 112 may differ in their respective capacities, and the capacity of a particular user system 112 might be entirely determined by permissions (permission levels) for the current user. For example, where a salesperson is using a particular user system 112 to interact with system 116, that user system has the capacities allotted to that salesperson. However, while an administrator is using that user system to interact with system 116, that user system has the capacities allotted to that administrator. In systems with a hierarchical role model, users at one permission level may have access to applications, data, and database information accessible by a lower permission level user, but may not have access to certain applications, database information, and data accessible by a user at a higher permission level. Thus, different users will have different capabilities with regard to accessing and modifying application and database information, depending on a user's security or permission level, also called authorization.
Network 114 is any network or combination of networks of devices that communicate with one another. For example, network 114 can be any one or any combination of a LAN (local area network), WAN (wide area network), telephone network, wireless network, point-to-point network, star network, token ring network, hub network, or other appropriate configuration. Network 114 can include a TCP/IP (Transmission Control Protocol/Internet Protocol) network, such as the global internetwork of networks often referred to as the “Internet” with a capital “I.” The Internet will be used in many of the examples herein. However, it should be understood that the networks that the present implementations might use are not so limited, although TCP/IP is a frequently implemented protocol.
User systems 112 might communicate with system 116 using TCP/IP and, at a higher network level, use other common Internet protocols to communicate, such as HTTP, FTP, AFS, WAP, etc. In an example where HTTP is used, user system 112 might include an HTTP client commonly referred to as a “browser” for sending and receiving HTTP signals to and from an HTTP server at system 116. Such an HTTP server might be implemented as the sole network interface 120 between system 116 and network 114, but other techniques might be used as well or instead. In some implementations, the network interface 120 between system 116 and network 114 includes load sharing functionality, such as round-robin HTTP request distributors to balance loads and distribute incoming HTTP requests evenly over a plurality of servers. At least for users accessing system 116, each of the plurality of servers has access to the MTS' data; however, other alternative configurations may be used instead.
In one implementation, system 116, shown in
One arrangement for elements of system 116 is shown in
Several elements in the system shown in
According to one implementation, each user system 112 and all of its components are operator configurable using applications, such as a browser, including computer code run using a central processing unit such as an Intel Pentium® processor or the like. Similarly, system 116 (and additional instances of an MTS, where more than one is present) and all of its components might be operator configurable using application(s) including computer code to run using processor system 117, which may be implemented to include a central processing unit, which may include an Intel Pentium® processor or the like, and/or multiple processor units.
A computer program product implementation includes a non-transitory machine-readable storage medium (media) having instructions stored thereon/in, which can be used to program a computer to perform any of the processes/methods of the implementations described herein. Computer program code 126 for operating and configuring system 116 to intercommunicate and to process web pages, applications and other data and media content as described herein is preferably downloadable and stored on a hard disk, but the entire program code, or portions thereof, may also be stored in any other volatile or non-volatile memory medium or device as is well known, such as a ROM or RAM, or provided on any media capable of storing program code, such as any type of rotating media including floppy disks, optical discs, digital versatile disk (DVD), compact disk (CD), microdrive, and magneto-optical disks, and magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data.
Additionally, the entire program code, or portions thereof, may be transmitted and downloaded from a software source over a transmission medium, e.g., over the Internet, or from another server, as is well known, or transmitted over any other conventional network connection as is well known (e.g., extranet, VPN, LAN, etc.) using any communication medium and protocols (e.g., TCP/IP, HTTP, HTTPS, Ethernet, etc.) as are well known. It will also be appreciated that computer code for the disclosed implementations can be realized in any programming language that can be executed on a client system and/or server or server system, for example, C, C++, HTML, any other markup language, Java™, JavaScript, ActiveX, any other scripting language such as VBScript, and many other programming languages as are well known. (Java™ is a trademark of Sun Microsystems, Inc.)
According to some implementations, each system 116 is configured to provide web pages, forms, applications, data and media content to user (client) systems 112 to support the access by user systems 112 as tenants of system 116. As such, system 116 provides security mechanisms to keep each tenant's data separate unless the data is shared. If more than one MTS is used, they may be located in close proximity to one another (e.g., in a server farm located in a single building or campus), or they may be distributed at locations remote from one another (e.g., one or more servers located in city A and one or more servers located in city B). As used herein, each MTS could include one or more logically and/or physically connected servers distributed locally or across one or more geographic locations. Additionally, the term “server” is meant to refer to a computing device or system, including processing hardware and process space(s), an associated storage system such as a memory device or database, and, in some instances, a database application (e.g., OODBMS or RDBMS) as is well known in the art. It should also be understood that “server system” and “server” are often used interchangeably herein. Similarly, the database objects described herein can be implemented as single databases, a distributed database, a collection of distributed databases, a database with redundant online or offline backups or other redundancies, etc., and might include a distributed database or storage network and associated processing intelligence.
The data source layer 202 provides input data (or, “inputs”) for populating the storage layer 220 and can include historical data 204, current data 206, salesperson data objects 208, and other input sources 210. The foregoing examples of input data are not exhaustive or intended to be limiting. Historical data 204 can include: information pertaining to previous sales campaigns such as previously-assigned spiffs, sales quotas, target leads, closed deals, duration, frequency of contacts, lead location(s), salespeople location(s); information pertaining to salespeople's previous performance; and information pertaining to customers' previous activities. The foregoing list of historical data is not exhaustive or intended to be limiting.
Current data 206 can include: information pertaining to current sales campaigns such as active spiffs, sales quotas, target leads, sales stage, duration, last contact, lead location(s), salespeople location(s); information pertaining to salespeople; and information pertaining to customers. The foregoing list of current data is not exhaustive or intended to be limiting.
Salesperson data objects 208 can include data associated with salespersons such as industries, job titles, skill sets, customer satisfaction scores, availability, physical proximity to sales targets, sales stages, target close dates, and active deal sizes. The foregoing list of salesperson data objects is not exhaustive or intended to be limiting. The salesperson data objects 208 can be imported as objects, e.g., using SOAP or any known object access protocol, or can be imported as raw data and then the actual classes and objects can subsequently be created in working data and metadata 222 explained further below.
Other input sources 210 can include any source of data not mentioned above, for example, rules to be imported as filters and weights 224, mined data, or an incoming data stream. Component data sources in the data source layer 202 can come from any external database, for example, an existing CRM or ERP system. The input data from the data source layer 202 can be imported and converted to a format usable by other components in the system 200 using any known method or tool, for example, XSLT, XQuery, Salesforce Data Loader, or any other migration or integration methods or tools. The converted data is stored in the storage layer 220.
The storage layer 220 contains all information that is understood by the higher layers and can include working data and metadata 222, filters and weights 224, and models 226. The working data and metadata 222 can include all data imported from historical data 204, current data 206, and salesperson data objects 208. The filters and weights 224 can include all imported rules, default filters and weights, and user-created filters and weights. The models 226 are all models created by the system 200, for example, the performance models and the spiff budget allocation models. The storage layer 220 is accessed by and returns results to the higher layers.
The business intelligence layer 230 performs intelligent mining and analysis operations and can include a performance model generator 232, a spiff budget allocation model generator 234, and an analysis engine 236. The analysis engine 236 analyzes data stored in the storage layer 220. The analysis engine 236 can process on-demand (i.e., it conducts analysis in response to commands from the model generators or directly from a user), passively (i.e., it conducts analysis as a background service, process, or thread even when it has received no command to do so), or as a hybrid of the two former approaches. An advantage of on-demand processing is that the analysis can be timed to occur after critical data has been imported. An advantage of passive processing is that the processing is staggered over time and results can be made ready any time. A hybridized approach can incorporate advantages of both approaches.
The presentation layer 240 delivers and formats information from lower layers to the users and can include a scripting engine 242, a rendering engine 244, and a user input handler 246. The scripting engine 242 processes client- and server-side scripts that format content for presentation, for example, Flash, PHP, JavaScript, JSP, and/or ASP. The rendering engine 244 pre-renders images and other multimedia content for delivery to the users. The user input handler 246 handles input from the users.
The architecture of system 200 is provided only as an example and is not meant to be limiting. The various components of the architecture are logical components and may be executed by one or more applications running on one or more computers.
The analysis engine 336 analyzes data, for example, working data and metadata 322, stored in the storage layer; the results of the analysis are used by the model generators 333 to generate models. For example, the performance model generator can instruct or request 370 the analysis engine 336 to generate a performance model corresponding to a particular salesperson. The performance model is then associated with the particular salesperson and a particular sales campaign and may vary its output based on the input parameters. For example, an input parameter can include historical data of the salesperson: a salesperson who historically has sold $x of a particular service or product in previous sales campaigns can be expected to continue to do so in the current sales campaign, all else equal. The created or generated performance model can output a number, for example, an anticipated sales result of the particular salesperson in the subject sales campaign. The output of the performance model or the performance model itself can be returned in the analysis results 372.
To account for the many factors that may affect a given output, the analysis engine 336 has many packages such as data mining 350, quantitative analysis 352, A/B testing 354, predictive modeling 356, predictive analytics 358, and multivariate testing 360. Each of the packages may access their own set of exclusive tools (e.g., full factorial, choice modeling, or Taguchi methods, which are exclusive to A/B testing and multivariate testing) and/or may have shared tools (e.g., average( ), sum( ), min( ), max( ), etc.). The analysis request 370 may also contain instructions on which package(s) to use; these instructions can be customized by the users and/or the model generator may have default and template instructions that are selected by the model generator intelligently in light of different scenarios. For example, the model generator 333 may issue an analysis request 370 to use the data mining package 350 and not the predictive modeling package 356 to search for patterns in large datasets with no known conditions. As another example, when certain strong correlations are known or suspected, the model generator 333 may issue an analysis request 370 to use the quantitative analysis package 352 to access a neural networks tool within the package.
The analysis request 370 may also contain filters and weights to be used as additional parameters for the analysis engine 336. For example, the user may decide that only salespeople with skill sets “Business Service Management Software”, “Cloud Computing”, and “Strategic Lead Creator” should be used for a sales campaign for a new BSM SaaS that is soon to become generally available. An appropriate filter can be created from a user interface by the user, and the filter instructions can be handled by the user input handler and forwarded to the performance model generator such that salespeople who do not meet the criteria are not selected, thus allowing the analysis engine to formulate a more efficient query plan and saving valuable computing cycles and memory space on the storage layer, the analysis engine 336, and the performance model generator. As another example, the user may recognize that a particular customer tends to prefer face-to-face interactions. The salesperson data objects' location field may then be used to calculate a proximity to the particular customer, and the proximity may be given a heavier weight in selecting the salespeople that are likely to have higher sales results with the particular customer.
In another embodiment, the filters and weights are not a part of the analysis request 370, but instead were previously saved to the storage layer and can be retrieved by the analysis engine 336 for consideration during analysis. The filters and weights can be returned to the analysis engine 336 in its data request to the storage layer upon the triggering of a specific condition. For example, whenever a customer who prefers face-to-face interactions has significant potential in a sales campaign, the weight will automatically be returned to the analysis engine 336 during analysis. The automatically-returned filters and weights can then be incorporated into the analysis results 372 and also associated with the generated models in a persistent manner so that downstream or later retrieval of the models allows the user to better understand and adjust the models. This can be useful for training the analysis engine 336 or model generators 333, for example, by allowing the user to adjust weights or choose packages and tools.
The sub-models can be stored in the storage layer for other performance models to later reference in accordance with a create-once-query-many-times scheme to save space and processing cycles. Thus, the performance model 426 can be a collection of pointers to sub-models that are stored in the storage layer. When new data enters the storage layer, the model generators can actively (i.e., upon user request to refresh) or passively (i.e., as a background service, process, or thread) refresh the sub-models so that the sub-models have up-to-date information when their data is referenced.
The sub-models can be aggregated using any of the analysis packages and tools or based on trained data. For example, using the regression analysis package, the sub-model 412 can be generated with the performance model's recognition that model 402 shows that the salesperson's sales of product P1 are converging to a number or range over time; model 404 shows increasing sales revenues until a plateau point; model 406 shows that the salesperson tends to generate higher revenues when given more time; model 408 shows that the salesperson tends to generate higher revenues when contacting a potential lead between 2 and 4 times; and model 410 shows that the salesperson tends to generate higher revenues with customers C4 and C7, both of which are leads in the current sales campaign. Thus, one potential aggregation would be to use model 404 with matching axes as a baseline model and then adjust the baseline model accordingly in light of the other models, for example, upwardly due to the relatively long duration parameter associated with the current sales campaign in light of model 406, and upwardly due to the fact that customers C4 and C7 are both target leads in the current sales campaign in light of model 410.
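The baseline-plus-adjustments aggregation described above can be sketched as follows. The multiplicative adjustment factors stand in for the upward adjustments drawn from sub-models such as 406 and 410; both the factoring scheme and the numbers are illustrative assumptions, not details taken from the disclosure.

```python
# Illustrative sketch of sub-model aggregation: a baseline sub-model
# (revenue as a function of spiff value) adjusted by factors contributed
# by other sub-models. All factors shown are hypothetical.

def aggregate(baseline, adjustments):
    """baseline: function mapping spiff value -> anticipated revenue.
    adjustments: multiplicative factors (e.g., >1.0 for a long campaign
    duration, or for favorable target leads in the campaign)."""
    def predictive_model(v):
        r = baseline(v)
        for factor in adjustments:
            r *= factor  # apply each sub-model's upward/downward adjustment
        return r
    return predictive_model

# Baseline from a model like 404, adjusted upward for duration (406)
# and favorable leads (410).
model_412 = aggregate(lambda v: 2.0 * v, [1.1, 1.05])
```

Additive offsets, learned weights, or any of the analysis packages could replace the simple multiplicative scheme; the point is only that the aggregated predictive model composes the sub-models' signals.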
Various sub-models can serve as factors for calculating a confidence measure of predictive model 412. For example, model 402 may increase the performance model generator's confidence measure of predictive model 412 due to the converging nature of the data in model 402. As another example, model 410 may decrease the performance model generator's confidence measure of model 412 if another sub-model (not shown) indicates that customers C4 and C7 both purchased product P1 within the past quarter and have a product purchase frequency for P1 that is much lower than 1/Q.
The data in the sub-models of the performance model 426 is shown as curved lines, but the original data can be a collection of data points. The curved lines can represent an interpolation or extrapolation of a plurality of data points, created using any known numerical approximation method. For higher accuracy and higher computational complexity, higher-order polynomials can be used to curve-fit the data. For lower accuracy and faster computation, lower-order polynomials can be used to curve-fit the data. Irrespective of whether the data is curve-fit, it is preferable to preserve the original data points in the sub-model data to minimize error in downstream calculations.
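The accuracy/complexity trade-off described above can be sketched with a least-squares polynomial fit, assuming NumPy is available; the data points are hypothetical, not taken from any sub-model in the disclosure.

```python
import numpy as np

# Hypothetical sub-model data points: spiff value vs. generated revenue.
spiff_values = np.array([0, 100, 200, 300, 400, 500], dtype=float)
revenues = np.array([0, 220, 390, 540, 600, 620], dtype=float)

# Lower-order fit: faster to compute, generally less accurate.
linear = np.polynomial.Polynomial.fit(spiff_values, revenues, deg=1)
# Higher-order fit: more accurate on the sample, more computation.
cubic = np.polynomial.Polynomial.fit(spiff_values, revenues, deg=3)

# Preserve the original data points alongside the fitted curves, as the
# text recommends, so downstream calculations can minimize error.
sub_model = {"points": (spiff_values, revenues),
             "fits": {"deg1": linear, "deg3": cubic}}
```

Any known numerical approximation method could be substituted for the polynomial fit; keeping the raw points in `sub_model` is what lets downstream consumers re-fit or interpolate without accumulated error.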
Algorithm 1: Greedy. Using a Greedy algorithm, the model generator orders payoff quotients, PQ. A payoff quotient is defined as a revenue over a spiff value, R/V. Thus, for example, the portion of the graph at v1 in predictive model 412 of performance model 426 can be considered to have the same payoff quotient as the portion of the graph at v2, since both portions fall on the R=V line. With the ordered payoff quotients, the model generator then selects payoffs in order. For example, if a salesperson data object has a maximum payoff quotient at v3 equal to, say, PQ1, and this is the highest PQ among the predictive models corresponding to the salesperson data objects in question, then the model generator first assigns spiff v3 to the corresponding salesperson data object. The model generator then continues in this manner until all salesperson data objects participating in the sales campaign have been accounted for.
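The Greedy ordering above can be sketched as follows. Each predictive model is stood in for by a list of (spiff value, anticipated revenue) candidate points; the data structures, the one-spiff-per-salesperson rule, and the explicit budget check are illustrative assumptions layered on the description, not details from the disclosure.

```python
# Sketch of Algorithm 1 (Greedy): order candidate spiffs by payoff
# quotient PQ = R / V and assign them in descending order.

def greedy_allocate(candidates, budget):
    """candidates: dict mapping salesperson id to a list of
    (spiff_value, anticipated_revenue) points from its predictive model.
    Returns a dict of salesperson id -> assigned spiff value."""
    # Build the ordered list of payoff quotients, highest first.
    scored = []
    for sp, points in candidates.items():
        for v, r in points:
            if v > 0:
                scored.append((r / v, v, r, sp))
    scored.sort(reverse=True)

    allocation = {}
    remaining = budget
    for pq, v, r, sp in scored:
        # Assign each salesperson at most one spiff, highest PQ first,
        # while the remaining budget allows it.
        if sp not in allocation and v <= remaining:
            allocation[sp] = v
            remaining -= v
        if len(allocation) == len(candidates):
            break
    return allocation
```

For example, with two candidate spiffs for S1 and one for S2, the highest payoff quotient is assigned first and the process continues until every participating salesperson data object is accounted for or the budget is exhausted.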
The foregoing algorithm can be further optimized by either (a) eliminating portions with non-negative second derivatives, or (b) finding all of the global and/or local maxima for the predictive models for the participating salesperson data objects and using the maxima as PQ points. Although method (b) may be faster, it may lose additional information by failing to account for error.
Algorithm 2: Simulated Annealing. Using a Simulated Annealing approach, the model generator starts with a baseline, e.g., an equitable distribution in which the spiff budget 520 is divided equally between all of the participating salesperson data objects in the sales campaign. Using this baseline, the model generator randomly or pseudo-randomly makes internal contributions, for example, by taking $x away from S1's subdivision and giving the $x to S5. For each internal contribution, the model generator tests whether the sum total of the anticipated payoffs increases. If the sum total increases, the model generator keeps the internal contribution; if the sum total decreases, the model generator discards it. Ties, where the sum total stays the same, can optionally be configured to be kept or discarded. The model generator can be set to make up to n internal contributions before stopping, where n can be customized by the user depending on time and space requirements, or n can be set to a default.
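The perturbation loop described above can be sketched as follows, assuming `payoff(sp, v)` stands in for evaluating a salesperson's predictive model at spiff value v. The step size, iteration count, and the choice to keep ties are illustrative configuration assumptions.

```python
import random

# Sketch of Algorithm 2: start from an equitable split and make up to n
# random internal contributions, keeping each one only if the sum total
# of anticipated payoffs does not decrease (ties kept in this variant).

def anneal_allocate(salespeople, budget, payoff, n=200, step=100, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    alloc = {sp: budget / len(salespeople) for sp in salespeople}
    total = sum(payoff(sp, v) for sp, v in alloc.items())
    for _ in range(n):
        giver, taker = rng.sample(salespeople, 2)
        x = min(step, alloc[giver])  # take $x away from the giver
        if x == 0:
            continue
        trial = dict(alloc)
        trial[giver] -= x
        trial[taker] += x
        trial_total = sum(payoff(sp, v) for sp, v in trial.items())
        if trial_total >= total:  # keep improving (or equal) moves
            alloc, total = trial, trial_total
    return alloc, total
```

Note that this accept-only-non-worsening variant is a hill climb over internal contributions; a full simulated-annealing schedule would also accept some worsening moves with a temperature-dependent probability, which the disclosure leaves as a design choice.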
Algorithm 3: Dynamic Programming. Using a Dynamic Programming approach, the model generator generates an ordered set of payoff quotients as described above. However, rather than selecting the highest payoff first, the model generator recursively takes the maximum revenue that can be attained with spiff value, v, such that M[v]=maxv
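Because the recurrence in the text is cut off, the following is a standard knapsack-style formulation consistent with the surrounding description: for each salesperson, choose at most one spiff value so that total spiff spend stays within the budget and total anticipated revenue is maximized. The data structures, integer spiff granularity, and one-spiff-per-salesperson rule are illustrative assumptions.

```python
# Sketch of Algorithm 3 (Dynamic Programming), assuming a knapsack-style
# recurrence: M[b] = best total revenue achievable with spend at most b
# over the salespeople considered so far.

def dp_allocate(candidates, budget):
    """candidates: list of (salesperson, [(spiff_value, revenue), ...]).
    Returns the maximum total anticipated revenue attainable."""
    M = [0] * (budget + 1)
    for sp, points in candidates:
        nxt = list(M)  # "no spiff for this salesperson" is the default
        for v, r in points:
            for b in range(v, budget + 1):
                # Give this salesperson spiff v, on top of the best
                # allocation of b - v to the previously considered ones.
                nxt[b] = max(nxt[b], M[b - v] + r)
        M = nxt
    return M[budget]
```

Unlike the Greedy approach, this formulation considers deferring a locally attractive payoff quotient when a different combination of spiffs yields a higher sum total within the budget.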
The foregoing algorithms are not an exhaustive set of example algorithms that could be used to solve the foregoing sum total performance maximization problem. However, each of the foregoing algorithms guarantees that the model generator will generate an allocation model that performs at least as well as an equitable distribution, and no worse.
Further, disclosed embodiments are advantageous in that they are data driven and present objective results. Thus, a sales manager who suspects that larger spiffs should be awarded to lower performers rather than higher performers, due to a lower performer's higher elasticity of generated revenue as a function of spiff value, can use the system to verify the suspicion. Furthermore, the below-described interface allows the sales manager to adjust the weights, statistical models, and algorithms accordingly if they do not attribute proper weight to the above-described phenomenon.
In addition to presenting information, the user interface presented by the presentation layer serves to allow the user to (1) modify various parameters associated with a sales campaign in order to possibly achieve a more optimal allocation, and (2) learn the significance of various factors in the model calculation.
In one embodiment, the performance model 426 is shown graphically in the user interface, for example, as shown in
In another embodiment, the spiff budget allocation model 500 is shown graphically in the user interface, for example, as shown in
Informational frame 602 shows the salespeople participating in the sales campaign. They can be selected or de-selected through the informational frame 602. As the user submits the user's selection or de-selection of salespeople, the spiff budget allocation model can be regenerated on the fly. Disclosed embodiments allow for on-the-fly or near-real-time model regeneration because the processor-intensive activity (e.g., generating the performance models) has already been completed by the performance model generator.
The informational frame 602 can provide additional information about each salesperson, for example, through an activation means such as mousing over the corresponding graphical object or right-clicking the object to open a menu and selecting a menu item.
Informational frame 604 shows various filters that may be applied. For example, the user can select or de-select items to exclude or include salespeople that match the filter set.
Informational frame 606 shows various spiff packages that may be applied. The performance models may include sub-models that track how a salesperson responds differently with different compensation formats. For example, salesperson S1 might historically have produced higher sales revenues when incentivized with a $15,000 luxury watch with $200 engraving rather than with $15,200 in cash. Informational frame 606 allows the user to edit spiff packages and apply the packages to individual salespersons, the expected output 690 changing on the fly. Further, the spiff budget allocation model generator might have already determined optimal spiff packages based on the aforementioned performance models. If so, the user interface can also display the determined optimal spiff packages, for example, when the user mouses over the salespeople S1 to S4 either in informational frame 602 or in the graphical representation of the spiff budget allocation model.
Informational frame 608 shows campaign details for the current sales campaign. The campaign details can also be edited by the user and saved as a new campaign in the system, for example, in another database in the storage layer.
Various modifications may be made to the method 700. For example, it is not required that S704 and S706 occur before S708. As described above, the performance models may be generated prior to creation of a new sales campaign. Further, it is optional that the system displays the results in a user interface at S712. The model data may instead be exported to another system for display or further analysis.
While one or more implementations have been described by way of example and in terms of the specific embodiments, it is to be understood that one or more implementations are not limited to the disclosed embodiments. To the contrary, it is intended to cover various modifications and similar arrangements as would be apparent to those skilled in the art. Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.