The present application is generally directed to vehicle operations and, more specifically, to tracking activities of vehicles such as trucks in a mining operation.
In open pit mining, huge quantities of ore and waste material are transported using large equipment. The major components of material handling are trucks, shovels and loaders. Trucks, depending on the size and manufacturer, are often organized into fleets. Depending on the material type, trucks may haul material from either shovels or loaders to the following destinations: dump areas/sites in the case of waste, and stockpiles or processing plants in the case of ore. Besides hauling, other main productive activities are material dumping, trucks driving empty, trucks loading, trucks spotting at shovel, and so on.
Due to the stochastic nature of activity durations and poor scheduling, non-productive time (NPT) arises from activities such as truck queuing and shovel starving (e.g., a shovel idling while waiting for trucks to arrive and load). In order to compete in the market and maintain sustainable and economical mining operations, companies attempt to improve their efficiency and reduce operational cost by decreasing the time spent in these non-productive activities. Truck assignment, as part of a dispatching system, determines the number of trucks from each fleet that should be operating between any particular pair of loading locations (loaders, shovels) and dumping locations (dump areas) to meet production requirements. Material transportation can represent up to 40% of operating costs, and hence reducing NPT in these systems can lead to savings for a mine operation.
Depending on the material type, trucks usually haul from either shovels or loaders to the following destinations: a dump area in the case of waste, and a stockpile or processing plant in the case of ore. Besides hauling, other main productive activities are material dumping, trucks driving empty, trucks loading, and trucks spotting at the shovel. Accurate predictions of the times for these activities will result in better operational planning (truck assignment) as well as better decisions in the dynamic dispatch of trucks.
Example implementations described herein are directed to a system that provides more accurate predictions, which may lead to a reduction in NPT and lower the cost of production.
Aspects of the present disclosure can include a method for managing a plurality of vehicles. The method can involve managing information associated with an activity from the plurality of vehicles, and a plurality of predictive models, wherein each of the plurality of predictive models is constructed based on one or more subsets of the information; for an activity associated with a first vehicle from the plurality of vehicles, determining which of the plurality of predictive models are relevant to the activity of the first vehicle; assigning a weight to each of the plurality of predictive models based on the activity, relevancy, one or more parameters of the first vehicle, and the managed information; aggregating the weighted predictive models; and generating an estimation for activity time of the activity for the first vehicle based on the aggregation.
Aspects of the present disclosure further include a non-transitory computer readable medium, storing instructions for executing a process for managing a plurality of vehicles. The instructions can include managing information associated with an activity from the plurality of vehicles, and a plurality of predictive models, wherein each of the plurality of predictive models is constructed based on one or more subsets of the information; for an activity associated with a first vehicle from the plurality of vehicles, determining which of the plurality of predictive models are relevant to the activity of the first vehicle; assigning a weight to each of the plurality of predictive models based on the activity, relevancy, one or more parameters of the first vehicle, and the managed information; aggregating the weighted predictive models; and generating an estimation for activity time of the activity for the first vehicle based on the aggregation.
Aspects of the present disclosure include an apparatus, configured to manage a plurality of vehicles. The apparatus can include a memory, configured to store information associated with an activity from the plurality of vehicles, and a plurality of predictive models, wherein each of the plurality of predictive models is constructed based on one or more subsets of the information; and a processor, configured to, for an activity associated with a first vehicle from the plurality of vehicles, determine which of the plurality of predictive models are relevant to the activity of the first vehicle; assign a weight to each of the plurality of predictive models based on the activity, relevancy, one or more parameters of the first vehicle, and the information stored in the memory; aggregate the weighted predictive models; and generate an estimation for activity time of the activity for the first vehicle based on the aggregation.
Aspects of the present disclosure include a system, configured to manage a plurality of vehicles. The system can include means for storing information associated with an activity from the plurality of vehicles, and a plurality of predictive models, wherein each of the plurality of predictive models is constructed based on one or more subsets of the information; and, for an activity associated with a first vehicle from the plurality of vehicles, means for determining which of the plurality of predictive models are relevant to the activity of the first vehicle; means for assigning a weight to each of the plurality of predictive models based on the activity, relevancy, one or more parameters of the first vehicle, and the stored information; means for aggregating the weighted predictive models; and means for generating an estimation for activity time of the activity for the first vehicle based on the aggregation.
The following detailed description provides further details of the figures and example implementations of the present application. Reference numerals and descriptions of redundant elements between figures are omitted for clarity. Terms used throughout the description are provided as examples and are not intended to be limiting. For example, the use of the term “automatic” may involve fully automatic or semi-automatic implementations involving user or administrator control over certain aspects of the implementation, depending on the desired implementation of one of ordinary skill in the art practicing implementations of the present application. Truck assignment and truck distribution may be used interchangeably. Example implementations described herein may be used singularly, or in combination with other example implementations described herein, or with any other desired implementation.
Example implementations described herein are directed to providing predictions of activity times of vehicles, which can be obtained through the following implementations. For each activity, relevant graph structures are defined for learning different predictive models. The structure defines multiple subsets over the data, and different models are learned for each of the subsets. Further, outliers are removed from operational data related to activity times. Example implementations utilize a combination of single and multi-dimensional outlier detection techniques to remove the outliers. Then, example implementations learn different machine learning models based on the defined structure. The predictions of the predictive models are integrated in real time through the use of time-varying weights, which take into account that some models may not provide predictions all the time.
Example implementations involve obtaining a solution for the prediction of activity times as an integration of multiple different predictors based on a pre-defined graphical structure. In example implementations, the difference in predictors comes from either the machine learning model structure or the data selected to learn the machine learning model. Predictors can be integrated using an online weighted average where weights are updated after each new observation.
Example implementations may utilize machine learning models and historical data to predict the duration of activities and the parameters of the activity duration distribution. Parameters of the distributions of activity durations, for use in the prediction of activity times, can be obtained as the output of a machine learning model that takes into account several variables such as terrain, weather, type of truck, and so on, depending on the desired implementation.
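As a minimal sketch of how such distribution parameters could be estimated, the following groups historical durations by explanatory variables and computes a mean and standard deviation per group. The field names (shift, weather, truck type) and the flat record layout are illustrative assumptions, not a prescribed implementation:

```python
from collections import defaultdict
from statistics import mean, stdev

def duration_distribution_params(records):
    """Estimate (mean, std) of activity duration per condition group.

    records: iterable of (shift, weather, truck_type, duration) tuples;
    the grouping fields are illustrative explanatory variables.
    """
    groups = defaultdict(list)
    for shift, weather, truck_type, duration in records:
        groups[(shift, weather, truck_type)].append(duration)
    # std of a single observation is taken as 0.0 rather than undefined
    return {key: (mean(vals), stdev(vals) if len(vals) > 1 else 0.0)
            for key, vals in groups.items()}
```

A richer implementation could replace the grouping with a regression model over the same variables; the grouped estimate above is simply the coarsest such model.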
Through the use of the example implementations described herein, accurate activity time predictions can be obtained to improve dispatching and reduce cost of mine operations.
Although example implementations described herein are described with respect to trucks in a mining operation, the present disclosure is not limited thereto and the example implementations can be extended to any vehicle that conducts any activities that are subject to scheduling. Such vehicles can include shovels, railcars, automobiles, boats, airplanes, and so on, depending on the desired implementation. Such activities can include delivery offloading, material or personnel loading, hauling, refueling, maintenance, and so on, depending on the desired implementation.
As illustrated in
The outputs of the machine learning models as well as data from the database are used as input parameters for optimization modules 301. The outputs of both simulation 302 and predictors 303 along with the data from database 304 can be used in the stochastic optimization that may generate additional predictors or weights for the forecasting of activity times and optimized scheduling. The obtained vehicle activity time forecasts and optimized scheduling can be displayed on a dashboard 305 so that a dispatcher 306 can determine the forecasted activity times and scheduling for the vehicles managed by the vehicle scheduling system. As illustrated in the system of
In example implementations for model learning and outlier removal for machine learning 303, multiple models can be created for each of the graphical structures and each activity. Examples of such models are moving average, exponential smoothing, linear and nonlinear regression, and so on. Denote each of these models as Mia(Xia,s), ia=1, 2, . . . ; a ∈ A={loading, hauling, dumping, empty, spotting, . . . }, where Xia,s is the set of explanatory variables for a particular model and activity on a particular subset s. The set of these variables should be kept the same at each node in the tree given the model. For example, if linear regression is utilized for predicting the hauling activity duration, the relevant set of explanatory variables for the model might include distance, route elevation difference, weather, shift, and so on. Also, the same set of explanatory variables, if it provides the best fit, should be used in each of the nodes. From a prediction perspective, the explanatory variables can be chosen such that they are obtainable in the near future and can be consumed by the model. For example, shift can be an important explanatory variable for activity durations because conditions may differ during the night versus the day. Weather data can also be important, since in the case of rain or high winds, vehicles may move slower than usual. For each of the models and each data subset based on the graphical structure, outliers can be detected and removed, assuming that there is enough data to learn the models after outlier removal. Examples of predictive models in Fleet Management Systems can include moving average and exponential smoothing. Example implementations facilitate the application of any of these models by following the graphical structure.
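The per-subset model learning with outlier removal described above can be sketched as follows. The interquartile-range (IQR) outlier rule is one illustrative single-dimensional technique, the exponential-smoothing predictor is one of the model families named above, and keying models by an (activity, subset) pair is an assumption about the data layout:

```python
from statistics import quantiles

def remove_outliers_iqr(durations, k=1.5):
    """Drop observations outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, _, q3 = quantiles(durations, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [d for d in durations if lo <= d <= hi]

def exponential_smoothing(durations, alpha=0.3):
    """Return the smoothed estimate after consuming all observations."""
    estimate = durations[0]
    for d in durations[1:]:
        estimate = alpha * d + (1 - alpha) * estimate
    return estimate

def fit_subset_models(history, alpha=0.3):
    """history: {(activity, subset): [duration, ...]} -> smoothed predictors."""
    models = {}
    for key, durations in history.items():
        cleaned = remove_outliers_iqr(durations)
        if cleaned:  # only learn a model if data survives outlier removal
            models[key] = exponential_smoothing(cleaned, alpha)
    return models
```

Any of the other model families (moving average, regression) could be substituted for `exponential_smoothing` without changing the per-subset structure.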
In example implementations, there are multiple predictors over multiple subsets. Each of the predictors provides its own estimate of the activity time, but in some cases prediction may not be possible for some of the predictors (e.g., a predictor directed to a first type of vehicle model may not be predictive for a different vehicle model). Thus, example implementations merge the predictions into a single value in an online fashion.
Denote by wia,s the weight for each of the predictors on each subset (denoted as s), and by yia,s the prediction of each model on each subset. The final prediction can be defined as the weighted average of all predictions:
ya = Σia,s wia,s yia,s, where Σia,s wia,s = 1
Example implementations define wia,s. That is, example implementations are directed to assigning a weight to each of the predictors and subsets based on historical performance. Historical performance is measured by the loss function value for each predictor and subset over the last K observations. Using the loss, weights can be defined as:

wia,s = Iia,s exp(−(1/c) Σk=1 . . . K αk Iia,s,k lia,s,k) / Σia′,s′ Iia′,s′ exp(−(1/c) Σk=1 . . . K αk Iia′,s′,k lia′,s′,k)
where Iia,s is a binary indicator of whether predictor ia provides a prediction at subset s. The constant c is utilized to scale the loss function and can be determined as the standard deviation of the historical activity times over the last K observations. The loss function, denoted as l, can be defined as the quadratic loss:

lia,s,k = (yia,s,k − yk)²
however, any other loss function can be utilized depending on the desired implementation. Iia,s,k is a binary indicator of whether predictor ia provided a prediction at subset s at the k-th closest observation in the past. Also, different weights can be utilized for different historical observations (e.g., closer observations are weighted with higher importance) using the term αk, defined as
αk=exp(−α*k)
where α is a discounting factor that is application specific.
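Putting the weighting scheme together, a minimal sketch is shown below. The exponential-of-discounted-loss form and the handling of missing predictions via `None` entries are assumptions consistent with, but not dictated by, the formulas above:

```python
import math

def predictor_weights(losses, available, c, discount=0.1):
    """Online weights from discounted historical losses.

    losses[i][k]: quadratic loss of predictor i at the k-th closest past
    observation (k = 0 is most recent); None if no prediction was made.
    available[i]: whether predictor i can predict the current activity.
    c: loss scale, e.g. the standard deviation of recent activity times.
    """
    scores = []
    for i, history in enumerate(losses):
        if not available[i]:
            scores.append(0.0)  # unavailable predictors get zero weight
            continue
        # discounted, scaled cumulative loss; missing predictions are skipped
        total = sum(math.exp(-discount * k) * l
                    for k, l in enumerate(history) if l is not None)
        scores.append(math.exp(-total / c))
    z = sum(scores)
    return [s / z for s in scores]

def aggregate(predictions, weights):
    """Weighted-average forecast over the available predictors."""
    return sum(w * y for w, y in zip(weights, predictions) if w > 0)
```

The weights are recomputed after each new observation, so a predictor whose recent losses grow sees its influence decay automatically.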
In example implementations, the machine learning models for activity durations are built to utilize as much relevant data as needed. Depending on the activity, explanatory variables can be obtained from truck activity, topology, and truck details based on information stored in the memory of the computer system. Such variables can include shift information, weather data, route characteristics, vehicle health data such as original equipment manufacturer (OEM) data, and so on. For the machine learning model to learn to predict each activity duration, the durations are provided in vehicle activity information 502-03.
Model structures 502-04 can store the structures that relate subsets to the corresponding activity as illustrated in
In example implementations described herein, management computer 102 can be configured to manage a plurality of vehicles, such as a truck fleet or mining trucks in a mining operation. The memory 502 can be configured to store information associated with an activity from the plurality of vehicles such as vehicle information 502-01, topology information 502-02, and vehicle activity information 502-03, along with a plurality of predictive models managed by model structures 502-04. Each of the plurality of predictive models can be constructed based on one or more subsets of the information as illustrated in
Processor 501 can be in the form of one or more hardware processors configured to, for an activity associated with a first vehicle from the plurality of vehicles, determine which of the plurality of predictive models are relevant to the activity of the first vehicle; assign a weight to each of the plurality of predictive models based on the activity, relevancy, and one or more parameters of the first vehicle and the information stored in the memory 502; aggregate the weighted predictive models; and generate an estimation for activity time of the activity for the first vehicle based on the aggregation as illustrated in
Processor 501 can also be configured to assign the weight to each of the plurality of predictive models based on the recency of use for the predictive model and error margin, as described in
In an example implementation, data is transmitted from a vehicle V1 to streaming engine 300 at a given point in time. The data is fed into the predictor models of machine learning 303, which can include a predictor for a first vehicle model MOD1 901, a predictor for a second vehicle model MOD2 902, a predictor for a first type of hauling size HAU1 903, and so on. Predictor models can be constructed for vehicle model, hauling size, and other parameters, depending on the desired implementation.
At 911, the flow assigns a weight to each of the predictive models based on the activity, relevancy, and parameters of the vehicle. Depending on the desired implementation, predictive models determined not to be relevant can be assigned a weight of zero. For example, a vehicle of model type MOD1 may have a weight of zero for the models directed to the vehicle model types MOD2, MOD3, and so on. For normalization purposes, the sum of all of the weights can be 1 to determine the influence of each of the predictors. In an example implementation, relevancy can be determined based on a threshold, whereupon a predictive model whose relevancy score falls below the threshold is considered not relevant. In another example implementation, relevancy can be normalized to be used as weights in the flow at 912. The determination of relevancy can be implemented in any manner according to the desired implementation, and is not particularly limited to any implementation.
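The threshold-and-normalize treatment of relevancy at 911 can be sketched as follows; the model names and the threshold value are hypothetical:

```python
def relevance_weights(scores, threshold=0.2):
    """Zero out models below the relevancy threshold, normalize the rest.

    scores: {model_name: relevancy score in [0, 1]} for the current vehicle.
    Returns weights over all models summing to 1 (all zeros if nothing
    passes the threshold).
    """
    kept = {m: s for m, s in scores.items() if s >= threshold}
    total = sum(kept.values())
    return {m: (kept.get(m, 0.0) / total if total else 0.0)
            for m in scores}
```

For a MOD1 vehicle, the MOD2-directed model would typically score below the threshold and thus receive a weight of zero, matching the example above.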
Weights may also be modified based on the recency of the data used for the predictive model. For example, the predictive models utilized for the integration 904 may not all be utilizing the most recent data from the vehicle V1 900, due to the data not being sent from V1 or for other reasons. In such cases, the relevant predictive models that incorporate data may not utilize the most recent data when they are incorporated for the integration 904. For example, at a window of time T, vehicle V1 900 may transmit data indicating the present location of the vehicle without any information regarding current weather conditions. The predictive models related to the vehicle model and the location of the vehicle can utilize the most recent data; however, the predictive model for the weather conditions may not have the most recent data (e.g., the last available data is from T−x time frames ago). Thus, when aggregation is conducted, the predictive models utilizing more recent data can be weighted higher than the predictive models utilizing less recent data. The adjustment of the weighting due to recency can be conducted in accordance with the desired implementation, and is not particularly limited. Further, weights may be adjusted based on the expected error for the predictive model. The error can be determined after results of the activity are received, as illustrated in the flow of
At 912, the flow aggregates the weighted predictive models based on the assigned weights. As shown at 940 from
At 1002, the flow generates additional predictive models for new subsets, if necessary. Such situations can occur if a new vehicle is introduced to the system that has parameters that are different from the rest of the fleet (e.g. new vehicle model, new hauling capacity, etc.) and is not in the database of the other vehicles of the system. In such cases, relevant predictors are selected for making the predictions through the use of the flow as illustrated in
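Selecting relevant predictors for a vehicle with previously unseen parameters can be sketched as a fallback over the graphical structure, from the most specific subset up to more general ones. The tuple-keyed tree representation and names such as MOD9 are illustrative assumptions:

```python
def select_predictor(models, keys):
    """Pick the most specific learned model along a subset path.

    models: {(activity, *subset_keys): predictor} learned per subset node.
    keys: subset path for the vehicle, most general first, e.g.
          ("hauling", "MOD9", "HAU2") for a newly introduced model MOD9.
    Falls back to the nearest more-general subset with a learned model.
    """
    for depth in range(len(keys), 0, -1):
        key = keys[:depth]
        if key in models:
            return models[key]
    return None  # no model at any level; a new one must be learned
```

Once enough observations of the new vehicle accumulate, a model can be learned for its specific subset and the fallback is no longer exercised.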
At 1003, the prediction can be updated based on the new results through the execution of the flows as illustrated in
In an example implementation of a control system involving the updated predictions from the flow of
In another example implementation of a control system involving the updated predictions from the flow of
Finally, some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations within a computer. These algorithmic descriptions and symbolic representations are the means used by those skilled in the data processing arts to convey the essence of their innovations to others skilled in the art. An algorithm is a series of defined steps leading to a desired end state or result. In example implementations, the steps carried out require physical manipulations of tangible quantities for achieving a tangible result.
Unless specifically stated otherwise, as apparent from the discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” “displaying,” or the like, can include the actions and processes of a computer system or other information processing device that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system's memories or registers or other information storage, transmission or display devices.
Example implementations may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may include one or more general-purpose computers selectively activated or reconfigured by one or more computer programs. Such computer programs may be stored in a computer readable medium, such as a computer-readable storage medium or a computer-readable signal medium. A computer-readable storage medium may involve tangible mediums such as, but not limited to optical disks, magnetic disks, read-only memories, random access memories, solid state devices and drives, or any other types of tangible or non-transitory media suitable for storing electronic information. A computer readable signal medium may include mediums such as carrier waves. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Computer programs can involve pure software implementations that involve instructions that perform the operations of the desired implementation.
Various general-purpose systems may be used with programs and modules in accordance with the examples herein, or it may prove convenient to construct a more specialized apparatus to perform desired method steps. In addition, the example implementations are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the example implementations as described herein. The instructions of the programming language(s) may be executed by one or more processing devices, e.g., central processing units (CPUs), processors, or controllers.
As is known in the art, the operations described above can be performed by hardware, software, or some combination of software and hardware. Various aspects of the example implementations may be implemented using circuits and logic devices (hardware), while other aspects may be implemented using instructions stored on a machine-readable medium (software), which if executed by a processor, would cause the processor to perform a method to carry out implementations of the present application. Further, some example implementations of the present application may be performed solely in hardware, whereas other example implementations may be performed solely in software. Moreover, the various functions described can be performed in a single unit, or can be spread across a number of components in any number of ways. When performed by software, the methods may be executed by a processor, such as a general purpose computer, based on instructions stored on a computer-readable medium. If desired, the instructions can be stored on the medium in a compressed and/or encrypted format.
Moreover, other implementations of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the teachings of the present application. Various aspects and/or components of the described example implementations may be used singly or in any combination. It is intended that the specification and example implementations be considered as examples only, with the true scope and spirit of the present application being indicated by the following claims.