The field relates generally to information processing systems, and more particularly to supply planning management within such information processing systems.
Many original equipment manufacturers (OEMs) utilize a “configure to order” (CTO) model with respect to enabling customers to place orders for equipment. In the CTO model, a customer can configure their equipment for purchase in a customized manner, i.e., specifying the equipment component by component starting from a base package. That is, the OEM makes available a base package and the customer then adds components to the base package to customize the equipment. Each customer order then goes to the OEM's manufacturing group to be separately built.
An alternative ordering model is a “finished goods assembly” (FGA) model. In the FGA model, rather than enabling the customer to specify components for the equipment for purchase, the equipment is pre-configured, typically with no component customization permitted by the customer. With the FGA model, the OEM typically utilizes a merge center where the equipment is assembled and then shipped to the customer.
Illustrative embodiments provide techniques for automated supply planning management within information processing systems.
For example, in an illustrative embodiment, a method comprises the following steps. The method obtains a first data set representing historical data associated with a non-customizable system, a second data set representing historical data associated with a customizable base system, and a third data set representing historical data associated with components used to customize the customizable base system. The method pre-processes at least portions of the first data set, the second data set and the third data set. The method performs forecasting processes respectively on the pre-processed portions of the first data set, the second data set and the third data set. The method correlates results of the forecasting processes and modifies the forecasting results associated with the third data set based on variations in one or more of the forecasting results associated with the first data set and the forecasting results associated with the second data set. The method generates a supply plan for components used to customize the customizable base system based on the modified forecasting results associated with the third data set.
Further illustrative embodiments are provided in the form of a non-transitory computer-readable storage medium having embodied therein executable program code that when executed by a processor causes the processor to perform the above steps. Still further illustrative embodiments comprise an apparatus with a processor and a memory configured to perform the above steps.
Advantageously, illustrative embodiments provide automated supply planning management that takes into account historical data for components obtained for customizing the base system in the context of a CTO-based ordering model. One or more illustrative embodiments utilize a machine learning algorithm to process data.
These and other illustrative embodiments include, without limitation, apparatus, systems, methods and computer program products comprising processor-readable storage media.
While the CTO model gives OEM customers the most flexibility, there can be a significant number of permutations of component combinations that the customer can select from for the equipment. For example, if the equipment is a computer system such as a desktop or laptop, the customer may have options to choose from including different random access memory capacities (e.g., 4 GB RAM, 8 GB RAM or 16 GB RAM capacity), different hard drive capacities (e.g., 500 GB or 1 TB capacity), different graphics cards, etc. Thus, the base system or structure (sometimes referred to as the “base mod”) is the same from customer to customer; however, each customer will have different added components based on their own equipment needs. Because there are not usually any discernible ordering patterns in the CTO model, it is difficult to manage supply planning.
On the other hand, the FGA model enables relatively accurate demand planning since the systems are pre-configured and the OEM knows how many systems need to be manufactured and stocked. An OEM can review the FGA sales history for a given region to predict (forecast) demand with a seasonality input, resulting in reasonably accurate forecasting (e.g., 80-85% accuracy). Based on this forecasting, the OEM performs supply planning (e.g., determining which raw materials, i.e., components, need to be purchased and stocked for building the equipment). A bill of materials (BOM) can then be generated based on the forecasted FGAs, and the OEM issues purchase orders to suppliers to purchase the forecasted quantity of components.
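By way of a non-limiting illustration, such seasonality-based FGA demand forecasting may be sketched in Python as follows; the function names and sales figures are hypothetical and not taken from the embodiments described herein:

```python
# Illustrative seasonal forecast for FGA sales: scale a trailing-average
# baseline by a per-quarter seasonality index derived from sales history.
# All names and quantities are hypothetical examples.

def seasonal_indices(history, period=4):
    """Average of each season's sales divided by the overall mean."""
    overall = sum(history) / len(history)
    indices = []
    for s in range(period):
        vals = history[s::period]
        indices.append((sum(vals) / len(vals)) / overall)
    return indices

def forecast_next_cycle(history, period=4):
    """Forecast the next `period` values: baseline level x seasonal index."""
    idx = seasonal_indices(history, period)
    level = sum(history[-period:]) / period  # trailing one-cycle average
    return [level * i for i in idx]

# Two years of quarterly FGA unit sales for one region (made-up numbers)
sales = [100, 120, 90, 150, 110, 130, 95, 160]
fc = forecast_next_cycle(sales)
```

In this sketch, the seasonal index captures, for example, a fourth-quarter peak, so the forecast preserves the seasonal shape of the historical data.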
In contrast, with the CTO model, there is no discernible purchase pattern because of the dynamic nature of customer configurations. The OEM can forecast the number of total systems based on sales history (similar to the FGA model) and therefore adequately predict the number of base packages (base mods) to order, but it is difficult to predict all the customer variations selected on top of the base packages. Nonetheless, the OEM still needs to purchase raw materials, e.g., components associated with customer-selectable options, and keep them available at the manufacturing location.
One manual approach to address this issue with the CTO model is to take the base package forecast and “attach” different customer combinations as percentages based on subject matter expert (SME) knowledge, e.g., 4 GB RAM 35%, 8 GB RAM 45%, 16 GB RAM 20%, etc. This approach is called attach rate planning. For example, the attach rate can be calculated as a percentage by dividing the number of systems with the given component (e.g., 4 GB RAM, 8 GB RAM, or 16 GB RAM) by the total number of systems, and then multiplying the quotient by 100. So with an attach rate of 45% for the 8 GB RAM component, the SME is estimating that 45% of the systems that will be sold will include the 8 GB RAM option. Unfortunately, due to this manual process, the forecast accuracy of CTO products is typically less than 40%.
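The attach rate computation described above may be illustrated as follows; the order data and component names are hypothetical examples:

```python
# Attach rate as described above: the number of systems containing a
# given component, divided by the total number of systems, times 100.

def attach_rate(component, orders):
    """Percentage of orders that include `component`."""
    with_component = sum(1 for order in orders if component in order)
    return 100.0 * with_component / len(orders)

# Each order is the set of RAM options chosen (illustrative data)
orders = [{"4GB"}, {"8GB"}, {"8GB"}, {"16GB"}, {"4GB"}] * 20
rate_8gb = attach_rate("8GB", orders)  # 40.0
```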
Illustrative embodiments overcome the above and other challenges associated with supply planning for CTO systems by taking into account historical data for components obtained for customizing the base package in the context of a CTO-based ordering model and using one or more machine learning (ML) algorithms. An ML algorithm is typically considered part of artificial intelligence (AI) technology where the computer-implemented algorithm learns from input data so as to improve subsequent output data.
Note that “demand planning” is forecasting customer demand while “supply planning” is the management of the inventory supply to meet the targets of the demand forecast. That is, supply planning seeks to fulfill the demand plan while meeting the OEM goals (e.g., financial and/or service objectives). A main challenge therefore is that while an SME can forecast customer demand relatively accurately with respect to an overall number of systems sold, the SME cannot adequately forecast the multitude of component customizations that will be selected by the customers in a CTO-based ordering environment, thus making it difficult to perform reasonably accurate supply planning.
Accordingly, AI-based CTO supply planning management server 110 is configured to accurately predict the components that will be attached to a CTO system, rather than relying on conventional SME-based percentage attach rate planning. Advantageously, AI-based CTO supply planning management server 110 is configured to perform automated attach rate planning by determining the ratios/percentages in which components will be attached to the CTO system, accounting for total purchases rather than just sales. In this manner, for example, accuracy can increase from less than 40% to approximately 75-85%, and the amount of safety stock (e.g., extra components purchased to accommodate inaccurate planning) can be reduced.
Illustrative embodiments make use of the actual purchase history of raw material components from different vendors. More particularly, illustrative embodiments use a combination of a base model demand planning forecast and ML-generated results of the actual raw material purchase to calculate how much raw material will be needed for the CTO purchase for a given future time horizon (e.g., next few weeks, months, quarters, etc.).
As shown in FIG. 2, process flow 200 receives as input FGA sales actuals data 202, base mod sales actuals data 204 and actual raw material purchase data 206.
“FGA sales actuals” refers to the sales data (e.g., quantities) for computer systems purchased in a “finished goods assembly” ordering system. Recall that, in the FGA ordering system, rather than enabling the customer to specify components for the equipment for purchase, the equipment is pre-configured typically with no component customization permitted by the customer. Thus, the computer system comes preconfigured with a housing, motherboard, power supply, RAM, graphics card, hard drive, etc. As such, data 202 represents sales data for FGA-based computer systems purchased over a predetermined historical time period.
“Base mod sales actuals” refers to sales data (e.g., quantities) for the components that constitute the base structure (base mod) of the computer systems purchased in a CTO ordering system. Thus, the base mod for the computer system comes with such standard components as a housing, a motherboard and a power supply. As such, data 204 represents sales data for the base mod portion of CTO-based computer systems purchased over the predetermined historical time period.
“Actual raw material purchase” refers to sales data (e.g., quantities) for the components that constitute the customizations of the base mod selected by the customers for the computer systems purchased in a CTO ordering system. Thus, the raw material refers to the selectable components such as RAM, a graphics card and a hard drive. As such, data 206 represents sales data for the customizable portions of the CTO-based computer systems purchased over the predetermined historical time period.
Note that the sales data (202, 204, and 206) considered as input for process 200 can be filtered not only by the predetermined time period of interest but also based on the sales regions and/or subregions defined by the OEM. In some embodiments, AI-based CTO supply planning management server 110 queries the one or more databases 120 for the sales data for the predetermined historical time period and specific regions/subregions of interest. In addition, as shown in process flow 200 of FIG. 2, the filtered sales data is pre-processed before the forecasting steps are performed.
In step 214, process flow 200 identifies the components used for the FGA systems (i.e., finds attached component required and classified by region/subregion). In some embodiments, this can be done by AI-based CTO supply planning management server 110 digitally analyzing a bill of materials (BOM) for the standard FGA system to identify the subject components.
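The BOM analysis of step 214 may be sketched as follows, assuming a simple list-of-records BOM structure; the field names and component identifiers are hypothetical:

```python
# Step 214 sketch: identify components attached to the standard FGA
# system by scanning its bill of materials (BOM), classified by region.
# The BOM record structure below is an assumption for illustration.

def attached_components(bom, region):
    """Return component IDs in the BOM tagged for the given region."""
    return [line["component"] for line in bom
            if line["region"] == region]

bom = [
    {"component": "8GB_RAM", "region": "EMEA"},
    {"component": "1TB_HDD", "region": "EMEA"},
    {"component": "GPU_X", "region": "APAC"},
]
emea_parts = attached_components(bom, "EMEA")  # ["8GB_RAM", "1TB_HDD"]
```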
In step 216, process flow 200 finds the purchase history for the raw materials used for the CTO system. More particularly, this can be calculated by AI-based CTO supply planning management server 110 subtracting the quantities of material used for the FGA system (e.g., 252 in FIG. 2) from the total quantities of raw materials purchased.
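The per-component subtraction of step 216 may be illustrated as follows, where the dictionaries of purchased and consumed quantities are hypothetical examples:

```python
# Step 216 sketch: CTO raw-material history computed as total
# raw-material purchases minus quantities consumed by FGA builds.
# Component keys and quantities are illustrative only.

def cto_raw_material_history(total_purchased, fga_usage):
    """Subtract FGA consumption from total purchases, per component."""
    return {
        comp: total_purchased[comp] - fga_usage.get(comp, 0)
        for comp in total_purchased
    }

total_purchased = {"8GB_RAM": 500, "1TB_HDD": 300, "GPU_X": 120}
fga_usage = {"8GB_RAM": 200, "1TB_HDD": 150}
cto_history = cto_raw_material_history(total_purchased, fga_usage)
# {"8GB_RAM": 300, "1TB_HDD": 150, "GPU_X": 120}
```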
Next in process flow 200, forecasts are obtained for the FGA system in step 218, the CTO raw material in step 220, and the base mod system in step 222 (e.g., 254 and 255 in FIG. 2).
After the forecasting for the FGA system in step 218, the results are applied to a linear regression algorithm in step 238 to smooth the predicted data set, for example, by statistically eliminating outliers from the data set to make patterns more noticeable. A similar linear regression smoothing is performed in step 240 on the predicted results from the base mod forecasting in step 222. Linear regression is considered a machine learning algorithm.
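The smoothing of steps 238 and 240 may be sketched as follows. This is one interpretation (an ordinary least-squares fit with outlier clamping) and not necessarily the exact algorithm employed by the embodiments:

```python
# Linear-regression smoothing sketch: fit a least-squares line to the
# forecast series, then replace points whose residual exceeds k standard
# deviations with the fitted value, suppressing outliers.

def smooth_with_regression(series, k=2.0):
    n = len(series)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(series) / n
    # Ordinary least-squares slope and intercept
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, series))
    den = sum((x - x_mean) ** 2 for x in xs)
    slope = num / den
    intercept = y_mean - slope * x_mean
    fitted = [intercept + slope * x for x in xs]
    resid = [y - f for y, f in zip(series, fitted)]
    std = (sum(r * r for r in resid) / n) ** 0.5
    # Replace outliers (residual beyond k standard deviations) with the fit
    return [f if abs(r) > k * std else y
            for y, f, r in zip(series, fitted, resid)]

forecast = [100, 102, 250, 106, 108, 110]  # 250 is an obvious outlier
smoothed = smooth_with_regression(forecast)
```

In this example, only the outlying value is pulled toward the regression line; the remaining points pass through unchanged.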
In step 242, the forecast results from FGA forecasting step 218 (smoothed in step 238), CTO raw material forecasting step 220, and base mod forecasting step 222 (smoothed in step 240) are applied to a correlation algorithm. In some embodiments, the correlation algorithm takes the median of the changes (variations) that occurred in the FGA forecast and the base mod forecast, and adjusts the CTO forecast percentage accordingly. For example, if the FGA forecast data is lower than the actual current data, the same variation is applied to the CTO forecast, and the CTO forecast data is reduced by the same proportion. The result of correlation step 242 is the predicted material required based on the CTO forecast (referred to as the attach rate), which becomes the CTO supply plan (e.g., supply plan 130 in FIG. 1).
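The correlation adjustment of step 242 may be sketched as follows. This is an interpretation of the description above (median of the FGA and base mod forecast variations, applied proportionally to the CTO forecast), not a definitive implementation; the forecast quantities and variation percentages are hypothetical:

```python
# Step 242 sketch: take the median of the percentage variations observed
# for the FGA and base-mod forecasts versus actuals, and apply that same
# variation to each value of the CTO raw-material forecast.

from statistics import median

def adjust_cto_forecast(cto_forecast, fga_variation_pct, base_mod_variation_pct):
    """Scale each CTO forecast value by the median observed variation."""
    var = median([fga_variation_pct, base_mod_variation_pct])
    return [q * (1.0 + var / 100.0) for q in cto_forecast]

# FGA forecast ran 10% below actuals, base mod 6% below (illustrative),
# so the CTO forecast is reduced by the median variation of 8%.
adjusted = adjust_cto_forecast([300, 150, 120], -10.0, -6.0)
```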
Advantageously, as illustratively described above, illustrative embodiments provide AI-based (machine learning) CTO supply planning management based on actual purchase data and correlated CTO, base mod and FGA forecasts, rather than conventional demand planning with SME-estimated attach rate percentages. As such, supply planning accuracy increases from about 30-40% to about 75-80%.
Step 302 obtains a first data set representing historical data associated with a non-customizable system (e.g., 202), a second data set representing historical data associated with a customizable base system (e.g., 204), and a third data set representing historical data associated with components used to customize the customizable base system (e.g., 206). Step 304 pre-processes at least portions of the first data set, the second data set and the third data set (e.g., 208 through 216). Step 306 performs forecasting processes respectively on the pre-processed portions of the first data set, the second data set and the third data set (e.g., 218, 222, 220).
Step 308 then correlates results of the forecasting processes and modifies the forecasting results associated with the third data set based on variations in one or more of the forecasting results associated with the first data set and the forecasting results associated with the second data set (e.g., step 242). Step 310 generates a supply plan for components used to customize the customizable base system based on the modified forecasting results associated with the third data set (step 244).
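The overall method of steps 302 through 310 may be sketched end to end as follows, with simplified stand-in functions (e.g., a mean-based placeholder forecast) substituted for the forecasting and smoothing steps described above; all data values are hypothetical:

```python
# Compact, self-contained sketch of steps 302-310. The forecasting
# function is a deliberately simple placeholder, not the embodiments'
# actual forecasting process.

from statistics import median

def naive_forecast(history):
    """Forecast the next value as the mean of the history (placeholder)."""
    return sum(history) / len(history)

def plan_supply(fga_hist, base_mod_hist, component_hist,
                fga_actual, base_mod_actual):
    # Step 306: forecast each pre-processed series
    fga_fc = naive_forecast(fga_hist)
    base_fc = naive_forecast(base_mod_hist)
    comp_fc = {c: naive_forecast(h) for c, h in component_hist.items()}
    # Step 308: median variation of FGA/base-mod forecasts vs. actuals,
    # applied to the component (third data set) forecasts
    variations = [(fga_actual - fga_fc) / fga_fc,
                  (base_mod_actual - base_fc) / base_fc]
    var = median(variations)
    # Step 310: supply plan = adjusted component forecasts
    return {c: fc * (1.0 + var) for c, fc in comp_fc.items()}

plan = plan_supply(
    fga_hist=[100, 110, 120], base_mod_hist=[200, 210, 220],
    component_hist={"8GB_RAM": [80, 90, 100]},
    fga_actual=121, base_mod_actual=231,
)
```

Here both reference forecasts run 10% below actuals, so the component forecast of 90 units is adjusted upward to 99 units in the resulting plan.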
Illustrative embodiments have been described herein with reference to exemplary information processing systems and associated computers, servers, storage devices and other processing devices. It is to be appreciated, however, that embodiments are not restricted to use with the particular illustrative system and device configurations shown. Accordingly, the term “information processing system” as used herein is intended to be broadly construed, so as to encompass, for example, processing platforms comprising cloud and/or non-cloud computing and storage systems, as well as other types of processing systems comprising various combinations of physical and/or virtual processing resources. An information processing system may therefore comprise, by way of example only, at least one data center or other type of cloud-based system that includes one or more clouds hosting tenants that access cloud resources. Cloud-based systems may include one or more public clouds, one or more private clouds, or a hybrid combination thereof.
By way of one example, FIG. 4 illustrates a processing platform 400 used to implement the systems and methodologies described herein.
The processing platform 400 in this embodiment comprises a plurality of processing devices, denoted 402-1, 402-2, 402-3, . . . , 402-N, which communicate with one another over network(s) 404. It is to be appreciated that the methodologies described herein may be executed in one such processing device 402, or executed in a distributed manner across two or more such processing devices 402. It is to be further appreciated that a server, a client device, a computing device or any other processing platform element may be viewed as an example of what is more generally referred to herein as a “processing device.” As illustrated in FIG. 4, each processing device 402 comprises a processor coupled to a memory.
The processing device 402-1 in the processing platform 400 comprises a processor 410 coupled to a memory 412. The processor 410 may comprise a microprocessor, a microcontroller, an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other type of processing circuitry, as well as portions or combinations of such circuitry elements. Components of systems as disclosed herein can be implemented at least in part in the form of one or more software programs stored in memory and executed by a processor of a processing device such as processor 410. Memory 412 (or other storage device) having such program code embodied therein is an example of what is more generally referred to herein as a processor-readable storage medium. Articles of manufacture or computer program products comprising such computer-readable or processor-readable storage media are considered embodiments of the invention. A given such article of manufacture may comprise, for example, a storage device such as a storage disk, a storage array or an integrated circuit containing memory. The terms “article of manufacture” and “computer program product” as used herein should be understood to exclude transitory, propagating signals.
Furthermore, memory 412 may comprise electronic memory such as random-access memory (RAM), read-only memory (ROM) or other types of memory, in any combination. The one or more software programs when executed by a processing device such as the processing device 402-1 causes the device to perform functions associated with one or more of the components/steps of the systems/methodologies in FIGS. 1-3.
Processing device 402-1 also includes network interface circuitry 414, which is used to interface the device with the networks 404 and other system components. Such circuitry may comprise conventional transceivers of a type well known in the art.
The other processing devices 402 (402-2, 402-3, . . . 402-N) of the processing platform 400 are assumed to be configured in a manner similar to that shown for processing device 402-1 in the figure.
The processing platform 400 shown in FIG. 4 is presented by way of example only, and other arrangements of processing devices are possible.
Also, numerous other arrangements of servers, clients, computers, storage devices or other components are possible in processing platform 400. Such components can communicate with other elements of the processing platform 400 over any type of network, such as a wide area network (WAN), a local area network (LAN), a satellite network, a telephone or cable network, or various portions or combinations of these and other types of networks.
Furthermore, it is to be appreciated that the processing platform 400 of FIG. 4 can comprise virtual machines (VMs) implemented using one or more hypervisors.
As is known, virtual machines are logical processing elements that may be instantiated on one or more physical processing elements (e.g., servers, computers, processing devices). That is, a “virtual machine” generally refers to a software implementation of a machine (i.e., a computer) that executes programs like a physical machine. Thus, different virtual machines can run different operating systems and multiple applications on the same physical computer. Virtualization is implemented by a hypervisor, which is inserted directly on top of the computer hardware and dynamically and transparently allocates hardware resources of the physical computer. The hypervisor enables multiple operating systems to run concurrently on a single physical computer and share hardware resources with each other.
It was noted above that portions of the computing environment may be implemented using one or more processing platforms. A given such processing platform comprises at least one processing device comprising a processor coupled to a memory, and the processing device may be implemented at least in part utilizing one or more virtual machines, containers or other virtualization infrastructure. By way of example, such containers may be Docker containers or other types of containers.
The particular processing operations and other system functionality described in conjunction with FIGS. 1-3 are presented by way of illustrative example only, and should not be construed as limiting the scope of the disclosure in any way.
It should again be emphasized that the above-described embodiments of the invention are presented for purposes of illustration only. Many variations may be made in the particular arrangements shown. For example, although described in the context of particular system and device configurations, the techniques are applicable to a wide variety of other types of data processing systems, processing devices and distributed virtual infrastructure arrangements. In addition, any simplifying assumptions made above in the course of describing the illustrative embodiments should also be viewed as exemplary rather than as requirements or limitations of the invention.