The present disclosure relates to techniques for managing inventory and determining operational health scores for stores. The present disclosure also relates to methodologies, systems and devices for presenting operational health score information via a graphical user interface.
In general, a store's performance can be audited in a number of ways. Certain conventional inventory identification techniques allow a user to audit individual stores and review their performance, but they do not provide a comprehensive health score that incorporates numerous operational health metrics.
Exemplary embodiments of the present disclosure provide inventory identification systems, devices and methods that facilitate determining and displaying operational health scores for stores of an enterprise based on operational metrics that are used to evaluate on-shelf-availability of inventory at the stores. The operational metrics can include, for example, pick completion, items binned, bin accuracy, manual counts, and manual picks. Once determined, the operational health score can be programmatically compared with a target operational health score to determine whether each particular store is above or below the desired health score value, and this operational health score can be displayed via a graphical user interface (GUI) generated and configured according to exemplary embodiments of the present disclosure.
In accordance with exemplary embodiments, a computer-implemented method is provided for translating physical activities within one or more stores into reference data corresponding to an operational health of the one or more stores. The method includes, in an inventory management system, receiving, at a server of the inventory management system, store activity data in an electronic format representing physical inventory processing tasks at a plurality of stores and corresponding to operational metrics associated with on-shelf-availability of products within the plurality of stores. The method also includes inputting the store activity data into an inventory rules engine programmed to generate and output a plurality of adoption scores based on the store activity data and one or more inventory rules, wherein each adoption score is a performance grade of one of the plurality of stores with respect to one of the operational metrics. The method also includes inputting the store activity data into an operational rules engine programmed to generate and output an operational health score for each of the plurality of stores, wherein each operational health score is statistically determined based on the plurality of adoption scores associated with one of the plurality of stores and one or more operational rules. The method also includes writing the store activity data, the plurality of adoption scores, and the operational health score for each of the plurality of stores into a database. The method also includes constructing a database query via an electronic display device to retrieve from the database at least one of the store activity data, the plurality of adoption scores, or the operational health score. The method also includes transmitting at least one of the store activity data, the plurality of adoption scores, or the operational health score from the database to the electronic display device. The method also includes rendering on the electronic display device a graphical user interface programmed to display a plurality of graphical indicators programmatically overlaid on a geographic map, each graphical indicator representative of the operational health score and geographical location of one of the plurality of stores.
In some embodiments, the graphical user interface is further programmed to display an on-shelf-availability score heat map including a plurality of store icons representing the on-shelf-availability score and geographical location of each of the plurality of stores. In some embodiments, the operational health score for each of the plurality of stores corresponds to a weighted sum of a pick completion adoption score (PCAS), a manual counts adoption score (MCAS), a manual pick adoption score (MPAS), an items binned adoption score (IBAS), and a bin accuracy adoption score (BAAS), wherein the PCAS is given most weight and the BAAS is given least weight. In some embodiments, the plurality of graphical indicators are color coded to represent the operational health score of a single store. In some embodiments, the plurality of graphical indicators include a plurality of user interface store icons selectable by a pointing device. In some embodiments, the pointing device includes at least one of a finger, pen, stylus, mouse cursor, or trackpad cursor. In some embodiments, the graphical user interface is further programmed to render, in response to a user selection of one of the store icons, at least one of an operational health score, process adoption score, a graph of operational metric data, an on-shelf-availability score, a process adoption by market, or a weekly adoption score corresponding to the selected store icon. In some embodiments, the plurality of graphical indicators are color coded to represent whether the operational health score corresponding to each store is above or below a threshold operational health score. In some embodiments, the store icons include an operational health score heat map of the plurality of stores. In some embodiments, the graphical user interface is further programmed to display a first listing of stores rated highest by the operational health score and a second listing of stores rated lowest by the operational health score. In some embodiments, the graphical user interface is further configured to zoom in on the heat map in response to a first user interface command, and zoom out on the heat map in response to a second user interface command. In some embodiments, the method further includes filtering the store activity data by data relating to at least one of a business unit, division, region, market, state, store, department, product category, or season.
In accordance with another exemplary embodiment, a system for translating physical activities within one or more stores into reference data corresponding to an operational health of the one or more stores is disclosed. The system includes a server of an inventory management system programmed to receive store activity data in an electronic format representing physical inventory processing tasks at a plurality of stores and corresponding to operational metrics associated with on-shelf-availability of products within the plurality of stores, and to input the store activity data into an inventory rules engine programmed to generate and output a plurality of adoption scores based on the store activity data and one or more inventory rules, wherein each adoption score is a performance grade of one of the plurality of stores with respect to one of the operational metrics. The server is also configured to input the store activity data into an operational rules engine programmed to generate and output an operational health score for each of the plurality of stores, wherein each operational health score is statistically determined based on the plurality of adoption scores associated with one of the plurality of stores and one or more operational rules. The server is also configured to write the store activity data, the plurality of adoption scores, and the operational health score for each of the plurality of stores into a database. The system also includes an electronic display device programmed to construct a database query requesting from the database at least one of the store activity data, the plurality of adoption scores, or the operational health score. The electronic display device is also configured to receive at least one of the store activity data, the plurality of adoption scores, or the operational health score from the database. The electronic display device is also configured to render, via a graphical user interface, a plurality of graphical indicators programmatically overlaid on a geographic map, each graphical indicator representative of the operational health score and geographical location of one of the plurality of stores.
In some embodiments, the graphical user interface is further programmed to display an on-shelf-availability score heat map including a plurality of on-shelf-availability store icons representing the on-shelf-availability score and geographical location of each of the plurality of stores. In some embodiments, the operational health score for each of the plurality of stores corresponds to a weighted sum of a pick completion adoption score (PCAS), a manual counts adoption score (MCAS), a manual pick adoption score (MPAS), an items binned adoption score (IBAS), and a bin accuracy adoption score (BAAS), wherein the PCAS is given most weight and the BAAS is given least weight. In some embodiments, the plurality of graphical indicators are color coded to represent the operational health score of a single store. In some embodiments, the plurality of graphical indicators include a plurality of user interface store icons selectable by a pointing device. In some embodiments, the pointing device includes at least one of a finger, pen, stylus, mouse cursor, or trackpad cursor. In some embodiments, the graphical user interface of the electronic display device is further programmed to render, in response to a user selection of one of the store icons, at least one of an operational health score, a process adoption score, a graph of operational metric data, an on-shelf-availability score, process adoption by market, or a weekly adoption score corresponding to the selected store icon. In some embodiments, the plurality of graphical indicators are color coded to represent whether the operational health score corresponding to each store is above or below a threshold operational health score. In some embodiments, the store icons include an operational health score heat map of the plurality of stores. In some embodiments, the graphical user interface is further programmed to zoom in on the heat map in response to a first user interface command, and zoom out on the heat map in response to a second user interface command. In some embodiments, the graphical user interface of the electronic display device is further programmed to display a first listing of stores rated highest by the operational health score, and display a second listing of stores rated lowest by the operational health score.
In accordance with another exemplary embodiment, a non-transitory computer readable medium storing instructions executable by a processing device is disclosed, wherein execution of the instructions causes the processing device to implement a method for translating physical activities within one or more stores into reference data corresponding to an operational health of the one or more stores. The method for identifying inventory includes receiving, at a server, store activity data in an electronic format representing physical inventory processing tasks at a plurality of stores and corresponding to operational metrics associated with on-shelf-availability of products within the plurality of stores. The method also includes inputting the store activity data into an inventory rules engine programmed to generate and output a plurality of adoption scores based on the store activity data and one or more inventory rules, wherein each adoption score is a performance grade of one of the plurality of stores with respect to one of the operational metrics. The method also includes inputting the store activity data into an operational rules engine programmed to generate and output an operational health score for each of the plurality of stores, wherein each operational health score is statistically determined based on the plurality of adoption scores associated with one of the plurality of stores and one or more operational rules. The method also includes writing the store activity data, the plurality of adoption scores, and the operational health score for each of the plurality of stores into a database. The method also includes constructing a database query via an electronic display device requesting from the database at least one of the store activity data, the plurality of adoption scores, or the operational health score. The method also includes transmitting at least one of the store activity data, the plurality of adoption scores, or the operational health score from the database to the electronic display device. The method also includes rendering a graphical user interface on the electronic display device programmed to display a plurality of graphical indicators programmatically overlaid on a geographic map, each graphical indicator representative of the operational health score and geographical location of one of the plurality of stores.
In some embodiments, the graphical user interface is further programmed to display an on-shelf-availability score heat map including a plurality of on-shelf-availability store icons representing the on-shelf-availability score and geographical location of each of the plurality of stores. In some embodiments, the operational health score for each of the plurality of stores corresponds to a weighted sum of a pick completion adoption score (PCAS), a manual counts adoption score (MCAS), a manual pick adoption score (MPAS), an items binned adoption score (IBAS), and a bin accuracy adoption score (BAAS), wherein the PCAS is given most weight and the BAAS is given least weight. In some embodiments, the plurality of graphical indicators are color coded to represent the operational health score of a single store. In some embodiments, the plurality of graphical indicators include a plurality of user interface store icons selectable by a pointing device. In some embodiments, the pointing device includes at least one of a finger, pen, stylus, mouse cursor, or trackpad cursor. In some embodiments, the graphical user interface is further programmed to render, in response to a user selection of one of the store icons, at least one of an operational health score, process adoption score, a graph of operational metric data, an on-shelf-availability score, a process adoption by market, or a weekly adoption score corresponding to the selected store icon. In some embodiments, the plurality of graphical indicators are color coded to represent whether the operational health score corresponding to each store is above or below a threshold operational health score. In some embodiments, the store icons include an operational health score heat map of the plurality of stores. In some embodiments, the graphical user interface is further programmed to display a first listing of stores rated highest by the operational health score and a second listing of stores rated lowest by the operational health score. In some embodiments, the graphical user interface is further configured to zoom in on the heat map in response to a first user interface command, and zoom out on the heat map in response to a second user interface command. In some embodiments, the method for identifying inventory further includes filtering the store activity data by data relating to at least one of a business unit, division, region, market, state, store, department, product category, or season.
Any combination or permutation of embodiments is envisioned.
The foregoing and other features and advantages provided by the present disclosure will be more fully understood from the following description of exemplary embodiments when read together with the accompanying drawings.
Certain terms are defined below to facilitate understanding of exemplary embodiments.
As used herein, the term “pick completion” means the percentage of picks (e.g., items picked from a storage room and moved onto the sales floor) that are completed compared to the total number of picks generated by an inventory management system (IMS).
As used herein, the term “items binned” means the number of times, or the actions of, putting items back into a storage room bin or returning items to a storage location.
As used herein, the term “manual picks” means a number of picks for items that were generated manually by a sales associate, in contrast to a “system pick” that is generated by an IMS automatically based on one or more business rules.
As used herein, the term “manual count” means an action of counting a quantity of an item in a store when the action is initiated by a sales associate. In contrast, an “automatic count” is initiated by an IMS based on one or more business rules, but the counting can still be completed by a sales associate.
As used herein, the term “bin accuracy” means a score determined by a physical audit of a bin or storage location by a sales associate comparing expected content to actual content.
As used herein, the term “adoption score” means a score assigned to each store based on that particular store's performance with respect to an operational health metric.
Provided herein are methodologies, systems, apparatus, and non-transitory computer-readable media for generating and presenting to a user, via a graphical user interface (GUI), a number of store performance scores that are generated based on store activity data and product information collected from a number of stores or businesses. In exemplary embodiments, the store activity data may be collected using one or more sensors, such as barcode readers and/or RFID readers. The types of store activity data collected and used to generate the store performance scores, also known as operational health scores, may include data relating to various operational metrics that significantly correlate to the on-shelf-availability of products within the stores. Such operational metrics can include, for example, pick completion, items binned, bin accuracy, manual counts, and manual picks. In some embodiments, these metrics can be weighted to reflect their relative impact on the operational health of a store and/or the on-shelf-availability of products within a store. Once generated, the operational health scores can be programmatically compared with a target operational health score to determine whether each particular store is above or below the desired health score value. In alternative embodiments, the operational health score of one store can be programmatically compared with the operational health score of one or more additional stores in order to determine the relative performance of two or more stores.
According to conventional methodologies, a store's performance would have to be analyzed multiple times with respect to each individual operational metric. Such techniques are inefficient in that they do not provide a single score representative of the overall operational health of a store. Exemplary embodiments address this shortcoming in conventional inventory management systems by generating a single operational health score for a store that combines multiple action-tracking or operational health metrics in a weighted distribution to determine the overall health and performance of a retail operation. This operational health score can then be presented on an electronic display device as a graphical indicator on a geographic map. Thus, the operational health of numerous stores can be efficiently displayed and visually compared against the operational health of other stores and/or against a target health score.
In exemplary embodiments, a heat map can be displayed via a GUI that can be generated and configured to include a number of store icons or color coded pins to represent the operational health scores of a number of stores. For example, a store with a health score above the target health score can have a green icon while a store with a health score below the target health score can have an orange or red icon. A similar heat map can be displayed corresponding to the on-shelf-availability score of each store. The health score heat map and on-shelf-availability score heat map can be filtered to include data from a specific business unit, division, region, market, state, store, department, category, etc. The GUI can also display a listing of the stores with the best health scores and a listing of the stores with the worst health scores. The GUI can also allow a user to zoom in or out on the heat map using one or more touch screen gestures or other user input commands.
In some exemplary embodiments, a separate user interface can be provided for each type of inventory identification, for example, a first user interface for providing a store health score heat map, a second user interface for providing a top and bottom stores list, a third user interface for providing a metrics tab, a fourth user interface for providing an on-shelf-availability heat map, and the like. In other exemplary embodiments, a single user interface can be used to perform two or more inventory identification operations.
Exemplary embodiments are described below with reference to the drawings. One of ordinary skill in the art will recognize that exemplary embodiments are not limited to the illustrative embodiments, and that components of exemplary systems, devices and methods are not limited to the illustrative embodiments described below.
Exemplary systems, devices, methods, and non-transitory computer-readable media can be used to define and execute one or more inventory identification operations in which operational health scores are determined for one or more stores based on a number of different operational health metrics. The operational health scores can be presented to a user via a GUI that displays a number of graphical indicators or icons overlaid on a virtual geographic map.
In step 102, an exemplary IMS can be programmed to receive store activity data from a plurality of stores. The store activity data is received in an electronic format, is representative of physical inventory processing tasks at each of the stores, and corresponds to operational metrics associated with on-shelf-availability of products within those stores. In exemplary embodiments, the store activity data can correspond to operational metrics such as bin accuracy, pick completion, manual counts, manual picks, items binned, etc. Certain metrics can include chronologically associated relationships between certain operational health metrics in order to determine whether the desired order of operations is being implemented. The store activity data can be input manually by a sales associate or store manager at a computer terminal within each of the stores, or can be detected automatically or manually by one or more sensors configured to monitor and report certain store activity data. In some embodiments, one or more sensors, such as a handheld barcode reader, a barcode reader at a point of sale terminal, an RFID reader, etc., may be used to collect store activity data and product information at each of the stores.
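For purposes of illustration only, a single item of store activity data received at the server might be represented as in the following sketch, in which the field names are exemplary and not required by the present disclosure:

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class StoreActivityRecord:
    """One physical inventory processing event reported to the IMS (field names illustrative)."""
    store_id: str
    metric: str           # e.g., "pick", "item_binned", "bin_accuracy_audit",
                          # "manual_count", or "manual_pick"
    item_id: str
    timestamp: datetime   # used to judge whether the task fell within an ideal time window
    quantity: int = 1
```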
In step 104, an exemplary IMS can input store activity data into an inventory rules engine. The inventory rules engine can be programmed to generate an adoption score for each of a plurality of stores based on each operational metric described above and one or more inventory rules. For example, in one embodiment the inventory rules engine is programmed to execute inventory rules to determine a pick completion adoption score (PCAS), an items binned adoption score (IBAS), a bin accuracy adoption score (BAAS), a manual counts adoption score (MCAS), a manual picks adoption score (MPAS), or any combination thereof. Each adoption score that is determined for a store can correspond to the store's performance with respect to an operational health metric. For example, a high PCAS corresponds to a high ratio of completed picks compared to generated picks; a high BAAS corresponds to a high ratio of actual bin contents compared to expected bin contents; a high MCAS corresponds to a low ratio of non-ideal counts (the sum of all manual counts performed during peak stocking and/or shopping times) compared to manual counts (the sum of all counts performed); a high MPAS corresponds to a high ratio of ideal picks (the number of manual picks performed during the ideal picking time period) compared to the total number of manual picks performed; and a high IBAS corresponds to a high ratio of items binned during the ideal binning time period compared to the total number of items binned.
In step 106, an exemplary IMS can generate adoption scores for each operational metric according to the rules or algorithms described below. In some embodiments, the inventory rules engine is programmed to calculate a PCAS, IBAS, MCAS, MPAS, and BAAS according to a series of operational rules or algorithms.
In example embodiments, the inventory rules engine can generate a PCAS according to the following algorithm:
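By way of non-limiting example, this rule can be expressed as in the following sketch, in which the function name is arbitrary and the thresholds mirror the illustrative ranges described below:

```python
def pick_completion_adoption_score(picks_worked: int, picks_generated: int) -> int:
    """Map a store's weekly pick completion ratio to a PCAS (illustrative thresholds)."""
    if picks_generated == 0 or picks_worked == 0:
        # No completed picks (or no generated picks) is treated as a zero score.
        return 0
    pick_completion = picks_worked / picks_generated
    if pick_completion > 0.95:
        return 10
    if pick_completion > 0.90:
        return 8
    if pick_completion > 0.80:
        return 6
    if pick_completion > 0.70:
        return 5
    if pick_completion > 0.60:
        return 4
    return 3
```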
As provided above, the PCAS can be generated based on a Pick_Completion score, which is determined by the ratio of picks worked divided by picks generated. For example, if the ratio of picks worked divided by picks generated is zero, the PCAS can be zero; if the ratio is greater than 0.95, the PCAS can be 10; if the ratio is between 0.90 and 0.95, the PCAS can be 8; if the ratio is between 0.80 and 0.90, the PCAS can be 6; if the ratio is between 0.70 and 0.80, the PCAS can be 5; if the ratio is between 0.60 and 0.70, the PCAS can be 4; and if the ratio is below 0.60, the PCAS can be 3. As will be appreciated, the various Pick_Completion score ranges (e.g., a PCAS of 5 for Pick_Completion ratios between 0.70 and 0.80) are provided for illustration purposes only and can be altered based on the characteristics of a particular store, season, region, or any other suitable factor.
In example embodiments, the inventory rules engine generates an IBAS according to the following algorithm:
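As a non-limiting example, a sketch of this rule, assuming the illustrative ranges and ideal binning window described below, is:

```python
def items_binned_adoption_score(ideal_binned: int, total_binned: int) -> int:
    """Map a weekly Binned_Adoption score (0-100) to an IBAS (illustrative thresholds).

    ideal_binned: items binned during the ideal binning window (e.g., before 8:00 AM).
    total_binned: all items binned during the week.
    """
    if total_binned == 0 or ideal_binned == 0:
        return 0
    binned_adoption = 100 * ideal_binned / total_binned
    if binned_adoption > 70:
        return 10
    if binned_adoption > 60:
        return 8
    if binned_adoption > 50:
        return 6
    if binned_adoption > 40:
        return 5
    if binned_adoption > 30:
        return 4
    return 2
```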
As provided above, the IBAS can be generated based on the Binned_Adoption score, which is generated by determining the sum of all items binned in a week during the ideal time period for binning items, divided by the sum of all items binned that week, and multiplying that ratio by 100. For example, if the Binned_Adoption score is greater than 70, the IBAS can be 10; if the Binned_Adoption score is between 60 and 70, the IBAS can be 8; if the Binned_Adoption score is between 50 and 60, the IBAS can be 6; if the Binned_Adoption score is between 40 and 50, the IBAS can be 5; if the Binned_Adoption score is between 30 and 40, the IBAS can be 4; if the Binned_Adoption score is below 30, the IBAS can be 2; and if the Binned_Adoption score is zero, the IBAS can be zero. As will be appreciated, the various Binned_Adoption score ranges (e.g., an IBAS of 5 for Binned_Adoption scores between 40 and 50) are provided for illustration purposes only and can be altered based on the characteristics of a particular store, season, region, etc. Likewise, the identification of ideal binned items can vary based on similar factors. For example, in this particular embodiment, the Ideal_Binned score is determined by the number of items binned before 8:00 AM because it is generally impractical for sales associates to spend their time binning items after 8:00 AM. However, this time period can vary depending on the region, product, season, etc.
In example embodiments, the inventory rules engine generates an MPAS according to the following algorithm:
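As a non-limiting example, a sketch of this rule, assuming the illustrative ranges and ideal picking window described below, is:

```python
def manual_picks_adoption_score(ideal_picks: int, manual_picks: int) -> int:
    """Map a weekly Picks_Adoption score (0-100) to an MPAS (illustrative thresholds).

    ideal_picks: manual picks performed during the ideal window (e.g., 6:00-11:00 AM).
    manual_picks: all manual picks performed during the week.
    """
    if manual_picks == 0 or ideal_picks == 0:
        return 0
    picks_adoption = 100 * ideal_picks / manual_picks
    if picks_adoption > 50:
        return 10
    if picks_adoption > 40:
        return 8
    if picks_adoption > 30:
        return 6
    if picks_adoption > 20:
        return 5
    return 2
```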
As provided above, the MPAS can be generated based on a Picks_Adoption score, which is generated by determining the sum of all ideal picks performed during a week divided by the sum of all manual picks performed during that week, and multiplying that ratio by 100. For example, if the Picks_Adoption score is greater than 50, the MPAS can be 10; if the Picks_Adoption score is between 40 and 50, the MPAS can be 8; if the Picks_Adoption score is between 30 and 40, the MPAS can be 6; if the Picks_Adoption score is between 20 and 30, the MPAS can be 5; if the Picks_Adoption score is below 20, the MPAS can be 2; and if the Picks_Adoption score is zero, the MPAS can be zero. As will be appreciated, the various Picks_Adoption score ranges (e.g., an MPAS of 5 for Picks_Adoption scores between 20 and 30) are provided for illustration purposes only and can be altered based on the characteristics of a particular store, season, region, etc. Likewise, the time periods used to identify Ideal_Picks can vary based on similar factors. For example, in this particular embodiment, the Ideal_Picks score is determined by the number of items manually picked between 6:00 AM and 11:00 AM because it is generally impractical for sales associates to spend their time performing manual picks outside of this time period. However, this time period can vary depending on the region, product, season, etc.
In example embodiments, the inventory rules engine generates an MCAS according to the following algorithm:
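By way of non-limiting example, a sketch of this rule, including the pick-completion-dependent scaling described below, is (all threshold values are illustrative only):

```python
def manual_counts_adoption_score(non_ideal_counts: int, manual_counts: int,
                                 pick_completion_score: float) -> int:
    """Map a weekly Count_Adoption score (0-100) to an MCAS (illustrative thresholds).

    non_ideal_counts: manual counts performed outside the preferred counting windows.
    manual_counts: all manual counts performed during the week.
    pick_completion_score: the store's pick completion score on a 0-100 scale.
    """
    if manual_counts == 0 or non_ideal_counts == 0:
        # A Count_Adoption score of zero yields an MCAS of zero
        # (no manual counts is treated the same way in this sketch).
        return 0
    count_adoption = 100 * non_ideal_counts / manual_counts
    if pick_completion_score <= 95:
        bands = [(35, 10), (40, 8), (45, 6), (50, 5), (55, 4)]   # stricter scaling
    else:
        bands = [(50, 10), (55, 8), (65, 6), (70, 5), (75, 4)]   # relaxed scaling
    for upper_bound, score in bands:
        if count_adoption < upper_bound:
            return score
    return 2
```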
As provided above, the MCAS can be generated based on a Count_Adoption score, which is generated by determining the sum of all non-ideal counts performed during a week divided by the sum of all manual counts performed during that week, and multiplying that ratio by 100. If the Count_Adoption score is zero, then the MCAS can be zero. When the Count_Adoption score is non-zero, the scaling of the MCAS may depend on how well a particular store is performing with respect to pick completion. For example, if the pick completion score for a given store is less than or equal to 95, then the following scaling can be used to determine the proper MCAS: if the Count_Adoption score is less than 35, the MCAS can be 10; if the Count_Adoption score is between 35 and 40, the MCAS can be 8; if the Count_Adoption score is between 40 and 45, the MCAS can be 6; if the Count_Adoption score is between 45 and 50, the MCAS can be 5; if the Count_Adoption score is between 50 and 55, the MCAS can be 4; and if the Count_Adoption score is above 55, the MCAS can be 2. However, if a store has a pick completion score greater than 95, then the following scaling can be used to determine the proper MCAS: if the Count_Adoption score is less than 50, the MCAS can be 10; if the Count_Adoption score is between 50 and 55, the MCAS can be 8; if the Count_Adoption score is between 55 and 65, the MCAS can be 6; if the Count_Adoption score is between 65 and 70, the MCAS can be 5; if the Count_Adoption score is between 70 and 75, the MCAS can be 4; and if the Count_Adoption score is greater than 75, the MCAS can be 2.
The various Count_Adoption score ranges can vary, in some embodiments, depending on the characteristics of a particular store, region, etc., or depending on how well a store performs with respect to other operational health metrics. Likewise, the time frames used to identify non-ideal counts can vary based on the region, season, or other factors. In this particular example, the rules engine scales the MCAS differently depending on how well the particular store has performed on the Pick_Completion score. Specifically, a store is penalized less for performing non-ideal counts if its Pick_Completion score is above 95. This scaling is implemented because, although sales associates are generally discouraged from performing manual counts before 7:00 AM or between 7:00 AM and 11:00 AM, the MCAS will not be decreased as significantly if the store maintains a high PCAS in spite of performing some manual counts during those time periods.
In example embodiments, the inventory rules engine generates a BAAS according to the following algorithm:
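As a non-limiting example, a sketch of this rule, assuming the illustrative ranges described below, is:

```python
def bin_accuracy_adoption_score(bin_accuracy: float) -> int:
    """Map a bin accuracy audit result (0-100) to a BAAS (illustrative thresholds)."""
    if bin_accuracy == 0:
        return 0
    if bin_accuracy > 90:
        return 10
    if bin_accuracy > 80:
        return 9
    if bin_accuracy > 70:
        return 7
    if bin_accuracy > 60:
        return 6
    if bin_accuracy > 50:
        return 5
    return 3
```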
As provided above, the BAAS can be generated based on a Bin_Accuracy score, which is a score determined by a physical audit of a bin or storage location comparing expected content to actual content. For example, if the Bin_Accuracy score is greater than 90, the BAAS can be 10; if the Bin_Accuracy score is between 80 and 90, the BAAS can be 9; if the Bin_Accuracy score is between 70 and 80, the BAAS can be 7; if the Bin_Accuracy score is between 60 and 70, the BAAS can be 6; if the Bin_Accuracy score is between 50 and 60, the BAAS can be 5; if the Bin_Accuracy score is below 50, the BAAS can be 3; and if the Bin_Accuracy score is zero, the BAAS can be zero. As will be appreciated, the various Bin_Accuracy score ranges (e.g., a BAAS of 5 for Bin_Accuracy scores between 50 and 60) are provided for illustration purposes only and can be altered based on the characteristics of a particular store, season, region, or any other suitable factor. The score ranges and other weighting values of the algorithms described above can be developed using historical store data and can be updated or refined periodically, in some embodiments.
In step 108, an exemplary IMS can input store activity data into an operational rules engine. The operational rules engine can be programmed to generate an operational health score for each of the plurality of stores corresponding to the previously calculated adoption scores associated with each store.
In step 110, an exemplary IMS can generate an operational health score for each store. In some embodiments, the operational rules engine is programmed to execute operational rules to statistically weight each adoption score and operational health metric based on its impact on the overall on-shelf-availability and operational health of a store. For example, if it is determined that pick completion has a much higher impact on on-shelf-availability than bin accuracy, the PCAS can be weighted much more heavily than the BAAS when calculating the overall operational health score for each store. The various weighting values can be determined based on historical store data and can be periodically updated or adjusted. In example embodiments, the operational health score (OHS) can be calculated according to equation (1) below.
OHS = W1*PCAS + W2*MCAS + W3*IBAS + W4*MPAS + 0.05*BAAS  (1)
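By way of non-limiting example, equation (1) can be evaluated as in the following sketch, which uses the illustrative weighting values discussed below as defaults:

```python
def operational_health_score(pcas: float, mcas: float, ibas: float,
                             mpas: float, baas: float,
                             w1: float = 0.40, w2: float = 0.25,
                             w3: float = 0.15, w4: float = 0.15) -> float:
    """Combine the adoption scores per equation (1); the default weights are illustrative."""
    return w1 * pcas + w2 * mcas + w3 * ibas + w4 * mpas + 0.05 * baas


# Example: PCAS=10, MCAS=8, IBAS=6, MPAS=5, BAAS=9 gives
# 0.40*10 + 0.25*8 + 0.15*6 + 0.15*5 + 0.05*9 = 8.1 on a 0-10 scale.
```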
In the above equation, W1-W4 represent weighting values that are applied to the adoption scores such that some adoption scores contribute more to the operational health score than others. As a non-limiting example, in some embodiments, W1 can be about 0.40, W2 can be about 0.25, W3 can be about 0.15, and W4 can be about 0.15.

In step 112, an exemplary IMS 100 can write the store activity data, adoption scores, and operational health scores into a database to store the data in physical memory locations of a computer-readable medium. In some embodiments, the database can be located remotely on one or more servers in a distributed network environment. The database stores this information so that it can be retrieved from the physical memory of the computer-readable medium and/or transmitted to an electronic display device in response to a query constructed based on input from a user. In some embodiments, the store activity data collected via the sensors may be temporarily stored locally at the stores, or may be automatically saved to the database.
In step 114, an exemplary IMS can construct a database query. The database query can be initiated based on user inputs received in response to an interaction with the GUI(s) rendered on the electronic display device. The database query can include a request, in a database language (e.g., SQL), to the database for the store activity data, one or more adoption scores, or one or more operational health scores. This database query can be transmitted over a wired or wireless network and can prompt a remote server to access the database, retrieve the requested information from physical memory locations, and transmit the desired information to the electronic display device.
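For purposes of illustration only, such a query might be constructed and executed as in the following sketch, in which the table and column names are hypothetical placeholders rather than a required schema:

```python
import sqlite3


def query_operational_health(db_path: str, market: str) -> list:
    """Build and run an example query for one market's operational health scores.

    The "store_scores" table and its columns are assumed names for this example.
    """
    connection = sqlite3.connect(db_path)
    query = (
        "SELECT store_id, operational_health_score "
        "FROM store_scores "
        "WHERE market = ? "
        "ORDER BY operational_health_score DESC"
    )
    rows = connection.execute(query, (market,)).fetchall()
    connection.close()
    return rows
```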
In step 116, an exemplary IMS can transmit the requested information from the database to the electronic display device for incorporation into a GUI rendered by the electronic display device. The information transmitted from the database corresponds to the information requested via the database query described above, and can include store activity data, one or more adoption scores, or one or more operational health scores, in some embodiments.
In step 118, an exemplary IMS can render a GUI on the electronic display device to display graphical indicators, overlaid on a geographic map, that are representative of the operational health scores of each store. Each graphical indicator can be representative of the operational health score and the geographic location of a particular store. In some embodiments, the GUI is also programmed to display an on-shelf-availability score heat map including a number of on-shelf-availability store icons that represent the on-shelf-availability score and geographical location of a number of stores. In some embodiments, the graphical indicators can be color coded to represent the operational health score of the stores. For example, a store with an above-average operational health score can be assigned a green store icon in the GUI, a store with an average operational health score can be assigned a yellow store icon in the GUI, and a store with a below-average operational health score can be assigned a red store icon in the GUI. The GUI can be a touch-screen UI, in some embodiments, and can include a capacitive or resistive touch sensitive display. The graphical indicators displayed on the geographical map can be, in some embodiments, selectable icons that a user can select via a pointing device such as a finger, pen, stylus, mouse cursor, or trackpad cursor. As discussed further below, selection of one of these icons can cause the GUI to render additional information, such as adoption scores, operational metric data, or an on-shelf-availability score, for the corresponding store.
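For purposes of illustration only, the color coding described above might be implemented as in the following sketch, in which the band around the average score is an exemplary choice:

```python
from statistics import mean


def indicator_color(score: float, displayed_scores: list) -> str:
    """Pick a map-pin color for a store relative to the average of the displayed stores.

    The 0.5-point band around the average is an illustrative choice only.
    """
    average = mean(displayed_scores)
    if score > average + 0.5:
        return "green"
    if score < average - 0.5:
        return "red"
    return "yellow"
```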
The system 200 can include a front end 222 that is in communication with the back end 202 to facilitate the generation of store activity data corresponding to operational health metrics. In exemplary embodiments, the front end can include one or more computer terminals 224, sensors 226, bins 228, and sales floor displays 230. The one or more computer terminals can be configured to facilitate communication with the back end 202 and to receive inputs corresponding to store activity data. The sensors 226 can be positioned in proximity to the bins 228 or displays 230, and can detect store activity data, such as when items are placed in bins or on displays, what types of items are in bins or on displays, or any other relevant store activity or inventory data. For example, in some embodiments the sensors can be radiofrequency identification (RFID) readers that can monitor RFID tags associated with and/or affixed to items/products in the bins 228 and/or displays 230. When an employee removes an item from, or places an item in, a bin 228 or display 230, the RFID reader can sense the RFID tag affixed to the item and can generate an electronic signal that is transmitted to the one or more terminals 224 as store activity data, and the one or more terminals can associate the store activity data with one or more operational metrics, e.g., a pick completion, an item binned, bin accuracy, a manual count, and/or a manual pick. In other embodiments, the sensors can be barcode scanners, either at a point of sale terminal or within a handheld device carried by a sales associate, that can scan and record product data and/or store activity data. The front end 222 can be in communication with the back end 202 via a wired or wireless network, and the store activity data collected at the front end 222 may be transferred, either automatically or manually, to the back end 202 to be stored in one of the raw data matrices 216.
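As a non-limiting illustration, the translation of an RFID read into store activity data might be sketched as follows; the record fields and the simplified event-to-metric association are assumptions made for this example:

```python
from datetime import datetime, timezone


def rfid_event_to_activity(reader_id: str, tag_id: str, action: str,
                           reader_to_bin: dict) -> dict:
    """Translate a raw RFID read into a store activity record (names and mapping illustrative).

    A simplified association is assumed: an item removed from a monitored bin is
    recorded as a pick, and an item placed into a monitored bin as an item binned.
    """
    metric = "pick" if action == "removed" else "item_binned"
    return {
        "item_tag": tag_id,
        "bin": reader_to_bin.get(reader_id),
        "metric": metric,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
```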
Virtualization can be employed in the computing device 300 so that infrastructure and resources in the computing device can be shared dynamically. A virtual machine 314 can be provided to handle a process running on multiple processors so that the process appears to be using only one computing resource rather than multiple computing resources. Multiple virtual machines can also be used with one processor.
Memory 306 can include a computer system memory or random access memory, such as DRAM, SRAM, EDO RAM, and the like. Memory 306 can include other types of memory as well, or combinations thereof.
A user can interact with the computing device 300 through a visual display device 318, such as a touch screen display or computer monitor, which can display one or more user interfaces 214 that can be provided in accordance with exemplary embodiments, for example, the exemplary interfaces illustrated in the accompanying drawings.
The computing device 300 can also include one or more storage devices 324, such as a hard-drive, CD-ROM, or other computer readable media, for storing data and computer-readable instructions and/or software, such as the inventory rules engine 204, operational rules engine 208, and/or user interfaces 212, that implement exemplary embodiments of the inventory management system 200 as taught herein or portions thereof. Exemplary storage device 324 can also store one or more databases for storing any suitable information required to implement exemplary embodiments. The databases can be updated by a user or automatically at any suitable time to add, delete or update one or more items in the databases. Exemplary storage device 324 can store one or more databases 326 for storing store activity data, adoption scores, operational health scores, any suitable maps or mapping information on one or more geographical areas where the stores of interest can be located, and any other data/information used to implement exemplary embodiments of the inventory management system.
The computing device 300 can include a network interface 312 configured to interface via one or more network devices 322 with one or more networks, for example, Local Area Network (LAN), Wide Area Network (WAN) or the Internet through a variety of connections including, but not limited to, standard telephone lines, LAN or WAN links (for example, 802.11, T1, T3, 56kb, X.25), broadband connections (for example, ISDN, Frame Relay, ATM), wireless connections, controller area network (CAN), or some combination of any or all of the above. The network interface 312 can include a built-in network adapter, network interface card, PCMCIA network card, card bus network adapter, wireless network adapter, USB network adapter, modem or any other device suitable for interfacing the computing device 300 to any type of network capable of communication and performing the operations described herein. Moreover, the computing device 300 can be any computer system, such as a workstation, desktop computer, server, laptop, handheld computer, tablet computer (e.g., the iPad® tablet computer), mobile computing or communication device (e.g., the iPhone® communication device), or other form of computing or telecommunications device that is capable of communication and that has sufficient processor power and memory capacity to perform the operations described herein.
The computing device 300 can run any operating system 316, such as any of the versions of the Microsoft® Windows® operating systems, the different releases of the Unix and Linux operating systems, any version of the MacOS® for Macintosh computers, any embedded operating system, any real-time operating system, any open source operating system, any proprietary operating system, any operating systems for mobile computing devices, or any other operating system capable of running on the computing device and performing the operations described herein. In exemplary embodiments, the operating system 316 can be run in native mode or emulated mode. In an exemplary embodiment, the operating system 316 can be run on one or more cloud machine instances.
In describing exemplary embodiments, specific terminology is used for the sake of clarity. For purposes of description, each specific term is intended to at least include all technical and functional equivalents that operate in a similar manner to accomplish a similar purpose. Additionally, in some instances where a particular exemplary embodiment includes a plurality of system elements, device components or method steps, those elements, components or steps can be replaced with a single element, component or step. Likewise, a single element, component or step can be replaced with a plurality of elements, components or steps that serve the same purpose. Moreover, while exemplary embodiments have been shown and described with references to particular embodiments thereof, those of ordinary skill in the art will understand that various substitutions and alterations in form and detail can be made therein without departing from the scope of the invention. Further still, other aspects, functions and advantages are also within the scope of the invention.
Exemplary flowcharts are provided herein for illustrative purposes and are non-limiting examples of methods. One of ordinary skill in the art will recognize that exemplary methods can include more or fewer steps than those illustrated in the exemplary flowcharts, and that the steps in the exemplary flowcharts can be performed in a different order than the order shown in the illustrative flowcharts.
This application claims the benefit of and priority to U.S. Provisional Application Ser. No. 62/074,898, filed Nov. 4, 2014, which is incorporated herein by reference in its entirety.