The present invention relates generally to payroll data systems and methods used to automate the process of translating compensation survey data into a delivery form suitable for inclusion in one or more associated survey data products.
In the field of employee compensation, fresh market data is one of the principal means by which companies remain competitive: it enables them to pay their employees a fair wage, so that those employees both feel valued within the company and hold sufficiently secure positions for which they are compensated at a reasonable level.
Companies have historically purchased available compensation data from large companies that have the resources and processes in place to properly aggregate such data. Companies typically participate in such surveys by submitting their own data to surveyors, which in turn return information that has already been aggregated, anonymized, and categorized into meaningful segments; for example, national pay trends, metropolitan-area pay differentials, and relative company sizes are frequently used as meaningful segment categories.
Human Resources and/or compensation professionals buy and download these results when the surveys are published. Such data is then used by the buyer, either raw or in spreadsheet form. In some instances, the compiled data is submitted to compensation software solution companies to load the data into such products, thereby alleviating the difficulty of manipulating data from multiple sources in a spreadsheet.
The format in which the results data are published varies widely from survey publisher to survey publisher. Historically, these varying export formats had been converted manually in spreadsheets into a desired format that allowed the data to be imported into the desired products. These transformations include multi-column concatenation, conditional concatenation, table transpositions, and multi-table lookups for constructing Job Titles and Job Codes as appropriate.
In view of the foregoing, there is a clear need for the processes and transformations described herein. For example, while the resulting data are very valuable, prior attempts to standardize such presentations and to add new techniques and processes have historically failed to provide the results needed by the industry.
In one example embodiment, the method standardizes a plurality of variable and scalable, historically manual transformations. In other words, customized presentations can be created and integrated that capture all, some, or few of the traditional factors; or that instead incorporate essentially customized results capable of integration into third-party products for customers wishing their data to be presented on a so-called white-glove delivery basis.
In a specific though non-limiting embodiment, the process is divided into at least four steps, wherein the raw data is detected, flattened, transformed, and mapped to a new set of headers and/or reordered as needed to accommodate a wide variety of survey export formats uploadable into products used by clients of the associated software.
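Purely by way of illustration and not limitation, the four steps might be composed as in the following sketch, in which each of the detection, flattening, transformation, and mapping steps described below would be supplied as one stage; the function and type names are assumptions for the sketch, not the actual implementation.

```python
# Illustrative sketch only: each stage is a hypothetical stand-in for the
# detect, flatten, transform, or map step described in this disclosure.
from typing import Any, Callable

Stage = Callable[[Any], Any]

def run_pipeline(raw_file: Any, stages: list[Stage]) -> Any:
    """Pass a raw survey export through each stage in order."""
    data = raw_file
    for stage in stages:
        data = stage(data)
    return data
```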
In other embodiments, file detection automates the historically manual process of opening files and inspecting certain attributes of the file format to confirm that the file is a particular survey from a particular publisher, so that it can be handled appropriately once the metadata has been detected from the raw file. File formats are detected using a combination of file attributes and pattern recognition; for example, the filename(s), the number of sheets in a workbook, and the number of words in the workbook might be appropriate file attributes in a particular application.
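As a hedged illustration of that idea (the signature fields, example filename pattern, and thresholds below are assumptions, not any publisher's real format), detection might compare a few inexpensive attributes of the workbook against a registry of known survey signatures:

```python
# Hypothetical attribute-based file detection; the signature fields and the
# example values are assumptions, not any publisher's actual format.
import re
from dataclasses import dataclass

@dataclass
class SurveySignature:
    publisher: str
    filename_pattern: str   # regex applied to the file name
    sheet_count: int        # expected number of sheets in the workbook
    min_words: int          # rough lower bound on words in the workbook

KNOWN_SIGNATURES = [
    SurveySignature("ExamplePublisher", r"national_comp_\d{4}\.xlsx$", 3, 500),
]

def detect_survey(filename: str, sheets: dict[str, list[list[str]]]) -> str | None:
    """Return the publisher whose signature matches the file, or None."""
    word_count = sum(
        len(str(cell).split())
        for rows in sheets.values() for row in rows for cell in row
    )
    for sig in KNOWN_SIGNATURES:
        if (re.search(sig.filename_pattern, filename)
                and len(sheets) == sig.sheet_count
                and word_count >= sig.min_words):
            return sig.publisher
    return None
```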
In further embodiments, using information gathered during the automated file detection phase, the file is then “flattened” to the relevant table of survey data. Flattening a file could involve a plurality of commands applied to headers, rows, and the like to arrive at the volume and character of data needed by the customer; for example, removing superfluous header rows above a table as well as removing any extra rows below the table data that do not belong to that particular table.
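For instance, under the assumption that the relevant table can be located by a known header cell (the header text and sample rows below are illustrative only), flattening might be sketched as follows:

```python
# Hypothetical flattening step: keep only the rows belonging to the relevant
# table, dropping banner rows above it and stray rows (footnotes) below it.
def flatten_sheet(rows: list[list[str]], header_cell: str) -> list[list[str]]:
    # Find the row whose first cell matches the known table header.
    start = next(i for i, row in enumerate(rows) if row and row[0] == header_cell)
    table = [rows[start]]
    for row in rows[start + 1:]:
        if not any(str(cell).strip() for cell in row):
            break  # first fully blank row marks the end of the table
        table.append(row)
    return table

# Example: banner text above and a footnote below are stripped away.
raw = [
    ["ExamplePublisher National Survey 2021"],
    [],
    ["Job Code", "Base Salary 25th", "Base Salary 50th"],
    ["1001", "50.52", "61.10"],
    [],
    ["* figures reported in thousands"],
]
print(flatten_sheet(raw, "Job Code"))  # header row plus the single data row
```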
In a still further embodiment, the table data have been extracted and flattened from the raw file and are ready for transformations to be applied to the data.
In various other embodiments, data represented in the thousands (for example, "50.52") have a thousands multiplier applied (yielding, for example, "50,520.00"); columns such as currency are added for all records so that each record has an associated currency; multiple columns are concatenated or conditionally concatenated to create distinct data records; placeholder data such as hash marks, hyphens, asterisks, etc., are cleaned out of cells where no compensation data was provided; and/or organization/incumbent weighted data are split apart and tagged appropriately, all transformed variably and scalably as necessary for the application.
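Purely as a sketch of a few such record-level transformations (the placeholder characters, currency code, and column names below are assumptions for illustration), the thousands multiplier, currency column, placeholder cleanup, and concatenation might be expressed as:

```python
# Illustrative record-level transformations; the placeholder characters, the
# currency code, and the column names are assumptions for this sketch only.
PLACEHOLDERS = {"#", "-", "--", "*", "***"}

def transform_record(record: dict[str, str], currency: str = "USD") -> dict[str, object]:
    out: dict[str, object] = {}
    for key, value in record.items():
        cleaned = value.strip()
        if not cleaned or cleaned in PLACEHOLDERS:
            out[key] = None                      # scrub placeholder "no data" marks
        elif key.startswith("Base Salary"):
            out[key] = float(cleaned.replace(",", "")) * 1000  # thousands multiplier
        else:
            out[key] = cleaned
    out["Currency"] = currency                   # every record carries a currency
    # Concatenate multiple columns into one distinct identifier.
    out["Job Title"] = " - ".join(v for v in (record.get("Family"), record.get("Level")) if v)
    return out

print(transform_record({"Family": "Engineering", "Level": "II", "Base Salary 25th": "50.52"}))
```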
Headers in spreadsheets commonly span multiple rows; these rows are merged with their accompanying cells of data to construct unique headers. See Table A below for an example of multi-row headers that must be combined to form distinct headers (Example: “Base Salary 25th”).
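A minimal sketch of constructing such headers, assuming a two-row header layout in which the upper row carries the pay element and the lower row the percentile (the layout and labels are illustrative assumptions):

```python
# Minimal sketch: merge a two-row header into distinct single-row headers,
# e.g. ("Base Salary", "25th") becomes "Base Salary 25th". Layout is assumed.
def merge_headers(top_row: list[str], bottom_row: list[str]) -> list[str]:
    merged = []
    last_top = ""
    for top, bottom in zip(top_row, bottom_row):
        if top.strip():
            last_top = top.strip()   # carry merged/blank top cells forward
        merged.append(" ".join(part for part in (last_top, bottom.strip()) if part))
    return merged

print(merge_headers(["Job Code", "Base Salary", "", ""],
                    ["", "25th", "50th", "75th"]))
# ['Job Code', 'Base Salary 25th', 'Base Salary 50th', 'Base Salary 75th']
```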
In further embodiments still, once the transformations are complete, the data are ready for mapping to an internal header that will then load the data to a certain field within certain products. In one specific though non-limiting embodiment, data are mapped to internal fields and rearranged as needed to transform the data into an uploadable format.
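As an illustrative sketch only (the source and internal field names are assumptions), the mapping and reordering step might be expressed as a simple dictionary applied to each record:

```python
# Hypothetical header mapping; the source and internal field names are
# assumptions used only to illustrate the mapping and reordering step.
HEADER_MAP = {
    "Base Salary 50th": "base_salary_median",
    "Job Code": "job_code",
    "Currency": "currency",
}
UPLOAD_ORDER = ["job_code", "base_salary_median", "currency"]

def map_record(record: dict[str, object]) -> dict[str, object]:
    """Rename survey headers to internal fields and order them for upload."""
    renamed = {HEADER_MAP[k]: v for k, v in record.items() if k in HEADER_MAP}
    return {field: renamed.get(field) for field in UPLOAD_ORDER}

print(map_record({"Job Code": "1001", "Base Salary 50th": 61100.0, "Currency": "USD"}))
# {'job_code': '1001', 'base_salary_median': 61100.0, 'currency': 'USD'}
```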
In such manner, data files are processed, transformed, and made available for use in the desired products, without human intervention needed to manually process or transform the desired compensation survey export format.
In other embodiments, certain formats require intensive filtering and joining. For example, in certain tables all percentile data are consolidated into a single set of columns, while certain columns need to be filtered (or “pivoted”) further to gather the data for a particular record.
In other embodiments, the instant method provides for filtering by percentile element, and then joining the resultant data into a more conventional table with pay elements and percentiles spread across many columns, thereby allowing the data to be imported more conventionally into a database in which a single row is a record of data.
In certain embodiments, the formats have a plurality of columns on which to pivot or filter the data further during the original filtering and joining process. See Tables B and C for an example of such transformations, where “Pay Element” serves as a useful pivot for the sake of description, though ordinarily skilled artisans will appreciate that the example(s) presented herein have been greatly simplified only to show the basic transformation process that occurs.
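A simplified sketch of that filter-and-join step, assuming a long-format table keyed by a “Pay Element” column in the spirit of Tables B and C (the column names and sample values are illustrative assumptions):

```python
# Simplified sketch of the filter-and-join step: a long table keyed by a
# hypothetical "Pay Element" column is pivoted so each job becomes one row
# with pay elements and percentiles spread across columns.
from collections import defaultdict

long_rows = [
    {"Job Code": "1001", "Pay Element": "Base Salary", "25th": 50520.0, "50th": 61100.0},
    {"Job Code": "1001", "Pay Element": "Total Cash",  "25th": 55000.0, "50th": 67500.0},
]

def pivot_by_pay_element(rows: list[dict]) -> list[dict]:
    joined: dict[str, dict] = defaultdict(dict)
    for row in rows:
        record = joined[row["Job Code"]]
        record["Job Code"] = row["Job Code"]
        for pct in ("25th", "50th"):
            # e.g. "Base Salary 25th", "Total Cash 50th"
            record[f'{row["Pay Element"]} {pct}'] = row[pct]
    return list(joined.values())

print(pivot_by_pay_element(long_rows))
# [{'Job Code': '1001', 'Base Salary 25th': 50520.0, 'Base Salary 50th': 61100.0,
#   'Total Cash 25th': 55000.0, 'Total Cash 50th': 67500.0}]
```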
Though the present invention has been depicted and described in detail above with respect to several exemplary embodiments, those of ordinary skill in the art will also appreciate that minor changes to the description, and various other modifications, omissions and additions, may also be made without departing from either the spirit or scope thereof.
Number | Date | Country
---|---|---
63180841 | Apr 2021 | US