Fuel and its usage during a vessel's operations represent a substantial portion of a vehicle's operational cost, e.g., for a marine vessel during a voyage. Emitted pollutants also impose a cost, and may be limited by law or regulation. Intuitively, voyage time depends on average vessel speed over the distance of the trip, and that speed is typically determined, in part, by average instantaneous fuel usage. Less intuitively, total trip fuel usage is in many cases also influenced by total trip time. Thus, there may be a non-intuitive and non-linear relationship between a vessel's speed, its total trip time, and total trip cost. A balance between cost, emissions, and trip time is needed to optimize operations for changing trip priorities and for the state of the vessel and the environment.
Edge computing is a distributed computing paradigm in which computation is largely or completely performed on distributed device nodes known as smart devices or edge devices as opposed to primarily taking place in a centralized cloud environment. The eponymous “edge” refers to the geographic distribution of computing nodes in the network as Internet of Things devices, which are at the “edge” of an enterprise, metropolitan or other network. The motivation is to provide server resources, data analysis and artificial intelligence (“ambient intelligence”) closer to data collection sources and cyber-physical systems such as smart sensors and actuators. Edge computing is seen as important in the realization of physical computing, smart cities, ubiquitous computing and the Internet of Things.
Edge computing is concerned with computation performed at the edge of networks, though typically also involves data collection and communication over networks.
Edge computing pushes applications, data and computing power (services) away from centralized points to the logical extremes of a network. Edge computing takes advantage of microservices architectures to allow some portion of applications to be moved to the edge of the network. While content delivery networks have moved fragments of information across distributed networks of servers and data stores, which may spread over a vast area, edge computing moves fragments of application logic out to the edge. As a technological paradigm, edge computing may be architecturally organized as peer-to-peer computing, autonomic (self-healing) computing, grid computing, and by other names implying non-centralized availability.
Edge computing is a method of optimizing applications or cloud computing systems by taking some portion of an application, its data, or services away from one or more central nodes (the “core”) to the other logical extreme (the “edge”) of the Internet which makes contact with the physical world or end users. In this architecture, according to one embodiment, specifically for Internet of things (IoT) devices, data comes in from the physical world via various sensors, and actions are taken to change physical state via various forms of output and actuators; by performing analytics and knowledge generation at the edge, communications bandwidth between systems under control and the central data center is reduced. Edge computing takes advantage of proximity to the physical items of interest and also exploits the relationships those items may have to each other. Another, broader way to define “edge computing” is to put any type of computer program that needs low latency nearer to the requests.
In some cases, edge computing requires leveraging resources that may not be continuously connected to a network such as autonomous vehicles, implanted medical devices, fields of highly distributed sensors, and mobile devices. Edge computing includes a wide range of technologies including wireless sensor networks, mobile data acquisition, mobile signature analysis, cooperative distributed peer-to-peer ad hoc networking and processing also classifiable as local cloud/fog computing and grid computing, dew computing, mobile edge computing, cloudlet, distributed data storage and retrieval, autonomic self-healing networks, remote cloud services, augmented reality, the Internet of Things and more. Edge computing can involve edge nodes directly attached to physical inputs and output or edge clouds that may have such contact but at least exist outside of centralized clouds closer to the edge.
See:
U.S. patent and Published U.S. Pat. Nos. 10,007,513; 10,014,812; 10,034,066; 10,056,008; 10,075,834; 10,087,065; 10,089,370; 10,089,610; 10,091,276; 10,106,237; 10,111,272; 3,951,626; 3,960,012; 3,960,060; 3,972,224; 3,974,802; 4,165,795; 4,212,066; 4,240,381; 4,286,324; 4,303,377; 4,307,450; 4,333,548; 4,341,984; 4,354,144; 4,364,265; 4,436,482; 4,469,055; 4,661,714; 4,742,681; 4,777,866; 4,796,592; 4,854,274; 4,858,569; 4,939,898; 4,994,188; 5,076,229; 5,097,814; 5,165,373; 5,195,469; 5,259,344; 5,266,009; 5,474,036; 5,520,161; 5,632,144; 5,658,176; 5,679,035; 5,788,004; 5,832,897; 6,092,021; 6,213,089; 6,295,970; 6,319,168; 6,325,047; 6,359,421; 6,390,059; 6,418,365; 6,427,659; 6,497,223; 6,512,983; 6,520,144; 6,564,546; 6,588,258; 6,641,365; 6,732,706; 6,752,733; 6,804,997; 6,955,081; 6,973,792; 6,990,855; 7,013,863; 7,121,253; 7,143,580; 7,225,793; 7,325,532; 7,392,129; 7,460,958; 7,488,357; 7,542,842; 8,155,868; 8,196,686; 8,291,587; 8,384,397; 8,418,462; 8,442,729; 8,514,061; 8,534,401; 8,539,764; 8,608,620; 8,640,437; 8,955,474; 8,996,290; 9,260,838; 9,267,454; 9,371,629; 9,399,185; 9,424,521; 9,441,532; 9,512,794; 9,574,492; 9,586,805; 9,592,964; 9,637,111; 9,638,537; 9,674,880; 9,711,050; 9,764,732; 9,775,562; 9,790,080; 9,792,259; 9,792,575; 9,815,683; 9,819,296; 9,836,056; 9,882,987; 9,889,840; 9,904,264; 9,904,900; 9,906,381; 9,923,124; 9,932,220; 9,932,925; 9,946,262; 9,981,840; 9,984,134; 9,992,701; 20010015194; 20010032617; 20020055815; 20020144671; 20030139248; 20040011325; 20040134268; 20040155468; 20040159721; 20050039526; 20050169743; 20060086089; 20060107586; 20060118079; 20060118086; 20060155486; 20070073467; 20070142997; 20080034720; 20080047272; 20080306636; 20080306674; 20090017987; 20090320461; 20100018479; 20100018480; 20100101409; 20100138118; 20100206721; 20100313418; 20110088386; 20110148614; 20110282561; 20110283695; 20120022734; 20120191280; 20120221227; 20130125745; 20130151115; 20130160744; 20140007574; 20140039768; 20140041626; 20140165561; 20140290595; 20140336905; 20150046060; 20150169714; 20150233279; 20150293981; 20150339586; 20160016525; 20160107650; 20160108805; 20160117785; 20160159339; 20160159364; 20160160786; 20160196527; 20160201586; 20160216130; 20160217381; 20160269436; 20160288782; 20160334767; 20160337198; 20160337441; 20160349330; 20160357187; 20160357188; 20160357262; 20160358084; 20160358477; 20160362096; 20160364678; 20160364679; 20160364812; 20160364823; 20170018688; 20170022015; 20170034644; 20170037790; 20170046669; 20170051689; 20170060567; 20170060574; 20170142204; 20170151928; 20170159556; 20170176958; 20170177546; 20170184315; 20170185956; 20170198458; 20170200324; 20170208540; 20170211453; 20170214760; 20170234691; 20170238346; 20170260920; 20170262790; 20170262820; 20170269599; 20170272972; 20170279957; 20170286572; 20170287335; 20170318360; 20170323249; 20170328679; 20170328680; 20170328681; 20170328682; 20170328683; 20170344620; 20180005178; 20180017405; 20180020477; 20180023489; 20180025408; 20180025430; 20180032836; 20180038703; 20180047107; 20180054376; 20180073459; 20180075380; 20180091506; 20180095470; 20180097883; 20180099855; 20180099858; 20180099862; 20180099863; 20180099864; 20180101183; 20180101184; 20180108023; 20180108942; 20180121903; 20180122234; 20180122237; 20180137219; 20180158020; 20180171592; 20180176329; 20180176663; 20180176664; 20180183661; 20180188704; 20180188714; 20180188715; 20180189332; 20180189344; 20180189717; 20180195254; 20180197418; 20180202379; 20180210425; 20180210426; 20180210427; 
20180215380; 20180218452; 20180229998; 20180230919; 20180253073; 20180253074; 20180253075; 20180255374; 20180255375; 20180255376; 20180255377; 20180255378; 20180255379; 20180255380; 20180255381; 20180255382; 20180255383; 20180262574; 20180270121; 20180274927; 20180279032; 20180284735; 20180284736; 20180284737; 20180284741; 20180284742; 20180284743; 20180284744; 20180284745; 20180284746; 20180284747; 20180284749; 20180284752; 20180284753; 20180284754; 20180284755; 20180284756; 20180284757; 20180284758; 20180288586; 20180288641; 20180290877; 20180293816; 20180299878; 20180300124; and 20180308371.
In order to predict, in real-time or near real-time (e.g., within 30, 10, 5, 1, 0.5, or 0.1 second(s)), the relationship between a vehicle's engine speed (rotations per minute, RPM) and its trip time and trip cost, a statistical model may be created to predict these complex relationships. The statistical model may also include geographic features and constraints, traffic and risk of delay, geopolitical risks, and the like. This is particularly useful for marine vessels.
Using some embodiments of the model and the methods and algorithms described herein, trip time and trip cost can be computed from predicted average vehicle speed and predicted average fuel flow rate, e.g., for every minute of a trip, for a known trip distance.
In a variance analysis of diesel engine data, engine fuel rate and vessel speed were found to have strong correlation with engine revolutions per minute (RPM) and engine load percentage (e.g., as represented by a “fuel index”) in a bounded range of engine RPM and when the engine was in steady state, i.e., engine RPM and engine load were stable.
Considering constant external factors (e.g., wind, current, ocean conditions, etc.) and for a given state of the vessel and engine inside a bounded region of engine RPM (e.g., above idle engine RPM), a function ƒ1 exists such that:
fuel rate = ƒ1(RPM, load)
where ƒ1: ℝn→ℝm. In this case, n equals two (RPM and load) and m equals one (fuel rate). In other words, ƒ1 is a map that allows for prediction of a single dependent variable from two independent variables. Similarly, a function ƒ2 exists such that:
vessel speed = ƒ2(RPM, load)
where ƒ2: ℝn→ℝm. In this case n equals two (RPM and load) and m equals one (vessel speed).
Grouping these two maps into one map leads to a multi-dimensional map (i.e., the model) such that ƒ: ℝn→ℝm, where n equals two (RPM and load) and m equals two (fuel rate and vessel speed). Critically, many maps are grouped into a single map with the same input variables, enabling potentially many correlated variables (i.e., a tensor of variables) to be predicted within a bounded range. Note that the specific independent variables need not be engine RPM and engine load and need not be limited to two variables. For example, engine operating hours can be added as an independent variable in the map to account for engine degradation with operating time.
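By way of a non-limiting illustration, the grouped map may be fit as a single multi-output regression. The following is a minimal sketch assuming per-minute averaged training records in a CSV file and scikit-learn as the learning library; the file name, column names, and choice of learner are assumptions for the sketch and are not part of the disclosed method.

```python
# Illustrative sketch only: fit one map f: (RPM, load) -> (fuel rate, vessel speed).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("engine_training_log.csv")        # hypothetical per-minute averaged log
X = df[["rpm", "load_pct"]].to_numpy()             # n = 2 independent variables
Y = df[["fuel_rate", "vessel_speed"]].to_numpy()   # m = 2 dependent variables

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, Y)                                    # a single model predicts both outputs

# Predict both dependent variables over a bounded RPM range at a fixed load.
rpm_grid = np.arange(800, 2001, 10)
grid = np.column_stack([rpm_grid, np.full(rpm_grid.shape, 75.0)])
fuel_rate_pred, speed_pred = model.predict(grid).T
```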
Vessel speed is also affected by factors in addition to engine RPM and engine load, such as: water speed and/or direction, wind speed and/or direction, propeller pitch, weight and drag of a towed load, weight of on-board fuel, marine growth on the vessel's hull, etc. Many of these factors are impractical or expensive to measure in real-time. Their effects are not known as mathematical functions, and so a direct measurement of those external variables is not necessarily effective for real-time prediction of speed, fuel usage, and/or emissions estimates at different RPMs and/or engine loads.
In some embodiments, an edge computing device installed on a vessel interfaces with all of the diesel engines' electronic control units/modules (ECUs/ECMs) and collects engine sensor data as a time series (e.g., all engines' RPMs, load percentages, fuel rates, etc.), as well as vessel speed and location data from an internal GPS/DGPS or the vessel's GPS/DGPS. For example, the edge device collects all of these sensor data at an approximate rate of sixty samples per minute and aligns the data to every second's time-stamp (e.g., 12:00:00, 12:00:01, 12:00:02, . . . ). If data can be recorded at a higher frequency, the average may be calculated for each second. Then the average value (i.e., arithmetic mean) for each minute is calculated, creating a per-minute averaged time series (e.g., 12:00:00, 12:01:00, 12:02:00, . . . ). Per-minute averaged data were found to be more stable for developing statistical models and predicting anomalies than raw, high-frequency samples. In some embodiments, data smoothing methods other than per-minute averaging are used.
For vessels with multiple engines, the model may assume that all engines are operating at approximately the same RPM, so that the average of all engines' RPM is used as the RPM input to the model and, similarly, the average of all engines' loads is used as the load input to the model. Of course, this is not a limitation, and more complex models may be implemented. Some parameter inputs to the model may be a summation instead of an average. For example, the fuel rate parameter can be the sum of all engines' fuel rates as opposed to the average.
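A minimal sketch of this data-preparation step follows, assuming the edge device exposes each engine's ECM channels and the GPS speed as pandas time series sharing a common DatetimeIndex; all names and column labels are illustrative.

```python
# Sketch: aggregate engines, align to whole seconds, then average per minute.
import pandas as pd

def per_minute_series(engine_frames, gps_speed):
    # Average RPM and load across engines; sum the engines' fuel rates.
    rpm = pd.concat([f["rpm"] for f in engine_frames], axis=1).mean(axis=1)
    load = pd.concat([f["load_pct"] for f in engine_frames], axis=1).mean(axis=1)
    fuel = pd.concat([f["fuel_rate"] for f in engine_frames], axis=1).sum(axis=1)

    df = pd.DataFrame({"rpm": rpm, "load_pct": load,
                       "fuel_rate": fuel, "speed": gps_speed})
    per_second = df.resample("1s").mean()       # align ~1 Hz samples to second time-stamps
    return per_second.resample("1min").mean()   # arithmetic mean for each minute
```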
The present technology provides an on-demand and near real-time method for predicting trip time and trip cost at different engine RPM at the current engine load, while accounting for the effects of the previously described unknown factors (without necessarily including their direct measurement). The combined effect of the unknown factors may be assumed to remain constant for varying vessel speeds at the given point in space and time. On the other hand, where sufficient data are available, more complex estimators may be employed for the unknown factors.
A point in space is defined as a latitude and longitude for marine vessels, though it may include elevation for airplanes. The model may continuously or periodically update the predicted relationship between input engine parameters and the resulting trip cost, time, and emissions as operating conditions (e.g., vessel load, water and weather conditions, etc.) change over time. These predictions can be coupled with trip distance information and dependent parameter constraints (e.g., cost, time, and/or emissions limits) to predict a range of engine RPM (or load or fuel index) over which those constraints are satisfied over the course of a trip. Such predictions allow vessel operators to make informed decisions and minimize fuel usage, overall costs, and/or emissions.
For example, in cases where trip time is the priority, such predictions allow a vessel to reach its destination on time, but with minimal fuel usage. When voyage duration is less important, such as when waiting for inclement weather, fuel usage can be minimized while maintaining a safe vessel operating speed.
A general explanation of the model is as follows: models that characterize the relationships between engine RPM, engine load, and engine fuel flow rate as well as engine RPM, engine load, and vessel speed are created using machine learning on training data collected in an environment where the effects of non-engine factors are minimized or may be minimized algorithmically. In some embodiments, the programming language R is used as an environment for statistical computing, model generation, and graphics. In order to create a calibration curve, training data may be collected in the following manner: in an area with minimal environmental factors (e.g., a calm harbor), navigate a vessel between two points, A and B. While navigating from A to B, slowly and gradually increase engine RPM from idle to maximum RPM and gradually decrease from maximum RPM to idle. Perform the same idle-to-maximum-to-idle RPM sweep when returning from point B to A. By averaging this training data, the contribution to vessel speed by any potential environmental factors can be further minimized in the training set. A mobile phone application or vessel-based user interface can help to validate that the required calibration data has been collected successfully. If this calibration curve were created just prior to a vessel's voyage, it would provide data that reflect the current operating conditions of the vessel (weight of on-board fuel and cargo or marine growth on the vessel's hull, for example) and can lead to more accurate predictions by the models in many cases. In other implementations, the model can be updated to include additional data points as the system collects data during a voyage. In addition, the model can be created using data collected from previous trips made by the vessel, which may prove useful in operating conditions where vessel cargo or vessel load fluctuate over a voyage.
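As one non-limiting illustration of assembling such a calibration curve, the per-minute records from the A-to-B and B-to-A legs may be binned by RPM and averaged, which tends to cancel the contribution of current and wind to measured speed. The column names and bin width below are assumptions.

```python
# Sketch: average opposite-direction calibration legs to suppress environmental effects.
import pandas as pd

def calibration_table(leg_ab: pd.DataFrame, leg_ba: pd.DataFrame, rpm_bin: int = 50):
    runs = pd.concat([leg_ab, leg_ba])
    runs["rpm_bin"] = (runs["rpm"] // rpm_bin) * rpm_bin     # group nearby RPM values
    return (runs.groupby("rpm_bin")[["load_pct", "fuel_rate", "speed"]]
                .mean()                                       # average the two directions
                .reset_index())
```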
During a voyage, near real-time engine RPM and engine load (from an ECM) and actual vessel speed (from a GPS) are logged by the edge device. Vessel speed and engine fuel flow rate are predicted using the generated statistical models. The difference between predicted vessel speed over ground and measured vessel speed over ground as determined by GPS or other devices is also computed in near real-time at the same time stamp.
In some embodiments, this difference (i.e., the error) between predicted and measured vessel speed is the summation of three error components: irreducible error, model bias error, and variance error.
Model bias error can be minimized using a low bias machine learning model (e.g., multivariate adaptive regression splines, Neural network, support vector machine (SVM), generalized additive model (GAM), etc.). GAM is further discussed below.
Thus, for high error values (e.g., error values greater than one standard deviation from the mean error, which is near zero), the majority of the error is expected to be made up of variance error, which is caused by the combined effects of all the unknown factors acting on the vessel and not accounted for in the model. The predicted vessel speeds are then corrected by adding the calculated error (i.e., the difference between the predicted and measured vessel speed) to the predicted speed at all RPM for the measured load. Note that the error may be negative.
With a model for the vessel speed at each RPM and the total trip distance, the expected trip time for each RPM can be calculated. Then, by multiplying the predicted trip time by the total fuel flow rate, the predicted total fuel usage for each RPM may be determined. Thus, models for RPM versus total trip time and RPM versus total trip fuel usage at the measured engine load may be generated. These two models can be grouped into a single model that will be referred to as the ‘trip model’. This combined model is updated at near real-time and for each successive data point as the trip distance is updated and/or as the difference between the predicted and measured speed changes. Predictions from the trip model can be further constrained by a safe speed range, trip cost limit, trip time limit, and/or trip emissions, for example.
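The following sketch shows one way the corrected speed, trip time, and trip fuel curves of the trip model could be recomputed for a grid of candidate RPM values at the measured load. Here predict_speed and predict_fuel_rate are hypothetical wrappers around the statistical models described above, and the units (knots, nautical miles, fuel per hour) are assumptions for the sketch.

```python
# Sketch: per-RPM trip model with the variance-error correction applied.
import numpy as np

def trip_model(rpm_grid, load, measured_rpm, measured_speed, remaining_distance):
    # Offset the whole predicted speed curve by the error observed at the
    # current operating point (the error may be negative).
    speed_error = measured_speed - predict_speed(measured_rpm, load)
    speed = np.array([predict_speed(r, load) for r in rpm_grid]) + speed_error
    fuel_rate = np.array([predict_fuel_rate(r, load) for r in rpm_grid])

    trip_time = remaining_distance / np.clip(speed, 1e-3, None)  # hours, guarding divide-by-zero
    trip_fuel = trip_time * fuel_rate                            # fuel for the remaining trip
    return speed, trip_time, trip_fuel
```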
If the real-time water speed and current direction are available, and water speed in the direction of the vessel's motion can be calculated, then the component of water speed in the direction of the vessel's motion can be subtracted from the speed error and the model can be updated with that refined error. In that case, knowing the forecast water speeds (e.g., tide timing and speed) or wind speeds and directions ahead of time can be useful for trip optimization. In some embodiments of model generation, water current and wind speed and direction data can be included in the model to predict vessel speed.
Additionally, the problems and algorithms discussed herein are equally applicable to airplanes moving through varying wind streams with varying cargo loads. Thus the analysis of speed and trip cost based on a set of engine parameters need not be limited to marine vessels and may be applied to any vehicle or vessel as needed and as feasible.
Various predictive modeling methods are known, including Group method of data handling; Naive Bayes; k-nearest neighbor algorithm; Majority classifier; Support vector machines; Random forests; Boosted trees; CART (Classification and Regression Trees); Multivariate adaptive regression splines (MARS); Neural Networks and deep neural networks; ACE and AVAS; Ordinary Least Squares; Generalized Linear Models (GLM) (The generalized linear model (GLM) is a flexible family of models that are unified under a single method. Logistic regression is a notable special case of GLM. Other types of GLM include Poisson regression, gamma regression, and multinomial regression); Logistic regression (Logistic regression is a technique in which unknown values of a discrete variable are predicted based on known values of one or more continuous and/or discrete variables. Logistic regression differs from ordinary least squares (OLS) regression in that the dependent variable is binary in nature. This procedure has many applications); Generalized additive models; Robust regression; and Semiparametric regression.
In statistics, the generalized linear model (GLM) is a flexible generalization of ordinary linear regression that allows for response variables that have error distribution models other than a normal distribution. The GLM generalizes linear regression by allowing the linear model to be related to the response variable via a link function and by allowing the magnitude of the variance of each measurement to be a function of its predicted value. Generalized linear models unify various other statistical models, including linear regression, logistic regression and Poisson regression, and employ an iteratively reweighted least squares method for maximum likelihood estimation of the model parameters.
Ordinary linear regression predicts the expected value of a given unknown quantity (the response variable, a random variable) as a linear combination of a set of observed values (predictors). This implies that a constant change in a predictor leads to a constant change in the response variable (i.e., a linear-response model). This is appropriate when the response variable has a normal distribution (intuitively, when a response variable can vary essentially indefinitely in either direction with no fixed “zero value”, or more generally for any quantity that only varies by a relatively small amount, e.g., human heights). However, these assumptions are inappropriate for some types of response variables. For example, in cases where the response variable is expected to be always positive and varying over a wide range, constant input changes lead to geometrically varying, rather than constantly varying, output changes.
In a generalized linear model (GLM), each outcome Y of the dependent variables is assumed to be generated from a particular distribution in the exponential family, a large range of probability distributions that includes the normal, binomial, Poisson and gamma distributions, among others.
The GLM consists of three elements: a probability distribution from the exponential family; a linear predictor η=Xβ; and a link function g such that E(Y)=μ=g−1(η). The linear predictor is the quantity which incorporates the information about the independent variables into the model. The symbol η (Greek “eta”) denotes the linear predictor. It is related to the expected value of the data through the link function. η is expressed as a linear combination (thus, “linear”) of unknown parameters β, with the coefficients of the linear combination represented as the matrix of independent variables X; η can thus be expressed as η=Xβ. The link function provides the relationship between the linear predictor and the mean of the distribution function. There are many commonly used link functions, and their choice is informed by several considerations. There is always a well-defined canonical link function which is derived from the exponential of the response's density function. However, in some cases it makes sense to try to match the domain of the link function to the range of the distribution function's mean, or to use a non-canonical link function for algorithmic purposes, for example Bayesian probit regression. For the most common distributions, the mean is one of the parameters in the standard form of the distribution's density function, and the canonical link is then the function, as defined above, that maps the density function into its canonical form. A simple, very important example of a generalized linear model (also an example of a general linear model) is linear regression. In linear regression, the use of the least-squares estimator is justified by the Gauss-Markov theorem, which does not assume that the distribution is normal.
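For concreteness, the three GLM elements can be seen in a few lines of code; the example below uses synthetic data and the statsmodels library, both of which are assumptions for illustration rather than part of the present disclosure.

```python
# Small GLM example: Poisson response, linear predictor eta = X*beta, log link.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = sm.add_constant(rng.uniform(0.0, 1.0, size=(200, 2)))  # design matrix for eta = X beta
mu = np.exp(X @ np.array([0.5, 1.0, -0.5]))                # inverse of the (log) link
y = rng.poisson(mu)                                        # exponential-family response

result = sm.GLM(y, X, family=sm.families.Poisson()).fit()  # iteratively reweighted least squares
print(result.params)
```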
The standard GLM assumes that the observations are uncorrelated. Extensions have been developed to allow for correlation between observations, as occurs for example in longitudinal studies and clustered designs. Generalized estimating equations (GEEs) allow for the correlation between observations without the use of an explicit probability model for the origin of the correlations, so there is no explicit likelihood. They are suitable when the random effects and their variances are not of inherent interest, as they allow for the correlation without explaining its origin. The focus is on estimating the average response over the population (“population-averaged” effects) rather than the regression parameters that would enable prediction of the effect of changing one or more components of X on a given individual. GEEs are usually used in conjunction with Huber-White standard errors. Generalized linear mixed models (GLMMs) are an extension to GLMs that includes random effects in the linear predictor, giving an explicit probability model that explains the origin of the correlations. The resulting “subject-specific” parameter estimates are suitable when the focus is on estimating the effect of changing one or more components of X on a given individual. GLMMs are also referred to as multilevel models and as mixed models. In general, fitting GLMMs is more computationally complex and intensive than fitting GEEs.
In statistics, a generalized additive model (GAM) is a generalized linear model in which the linear predictor depends linearly on unknown smooth functions of some predictor variables, and interest focuses on inference about these smooth functions. GAMs were originally developed by Trevor Hastie and Robert Tibshirani to blend properties of generalized linear models with additive models.
The model relates a univariate response variable, Y, to some predictor variables, xi. An exponential family distribution is specified for Y (for example normal, binomial, or Poisson distributions) along with a link function g (for example the identity or log functions) relating the expected value of the response variable to the predictor variables through a structure such as g(E(Y))=β0+ƒ1(x1)+ƒ2(x2)+ . . . +ƒm(xm), where the ƒi are smooth functions of the predictors.
The functions ƒi may have a specified parametric form (for example a polynomial, or an un-penalized regression spline of a variable) or may be specified non-parametrically, or semi-parametrically, simply as ‘smooth functions’, to be estimated by non-parametric means. So a typical GAM might use a scatterplot smoothing function, such as a locally weighted mean. This flexibility to allow non-parametric fits with relaxed assumptions on the actual relationship between response and predictor provides the potential for better fits to data than purely parametric models, but arguably with some loss of interpretability.
Any continuous multivariate function can be represented as sums and compositions of univariate functions. Unfortunately, though the Kolmogorov-Arnold representation theorem asserts the existence of a function of this form, it gives no mechanism whereby one could be constructed. Certain constructive proofs exist, but they tend to require highly complicated (i.e., fractal) functions, and thus are not suitable for modeling approaches. It is not clear that any step-wise (i.e., backfitting algorithm) approach could even approximate a solution. Therefore, the generalized additive model drops the outer sum, and demands instead that the function belong to a simpler class.
The original GAM fitting method estimated the smooth components of the model using non-parametric smoothers (for example smoothing splines or local linear regression smoothers) via the backfitting algorithm. Backfitting works by iterative smoothing of partial residuals and provides a very general, modular estimation method capable of using a wide variety of smoothing methods to estimate the terms. Many modern implementations of GAMs and their extensions are built around the reduced-rank smoothing approach, because it allows well-founded estimation of the smoothness of the component smooths at comparatively modest computational cost, and also facilitates implementation of a number of model extensions in a way that is more difficult with other methods. At its simplest, the idea is to replace the unknown smooth functions in the model with basis expansions, on which smoothing penalties are imposed at some level. Smoothing bias complicates interval estimation for these models, and the simplest remedy turns out to be a Bayesian approach. Understanding this Bayesian view of smoothing also helps in understanding the REML and full Bayes approaches to smoothing parameter estimation.
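As a brief, hedged illustration of a penalized-spline GAM fit, the sketch below uses the pygam package and synthetic stand-ins for the per-minute RPM, load, and speed series; the package choice, term structure, and data are assumptions, not requirements of the present technology.

```python
# Sketch: GAM with smooth terms for RPM and load; penalties chosen by grid search.
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(1)
rpm = rng.uniform(800, 2000, 500)                   # synthetic per-minute RPM
load = rng.uniform(40, 100, 500)                    # synthetic per-minute load (%)
speed = 2.0 + 0.004 * rpm + 0.02 * load + rng.normal(0.0, 0.2, 500)

X = np.column_stack([rpm, load])
gam = LinearGAM(s(0) + s(1)).gridsearch(X, speed)   # s(0), s(1): smooths of RPM and load
speed_hat = gam.predict(X)                          # low-bias predicted vessel speed
```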
Overfitting can be a problem with GAMs, especially if there is un-modelled residual auto-correlation or un-modelled overdispersion. Cross-validation can be used to detect and/or reduce overfitting problems with GAMs (or other statistical methods), and software often allows the level of penalization to be increased to force smoother fits. Estimating very large numbers of smoothing parameters is also likely to be statistically challenging, and there are known tendencies for prediction error criteria (GCV, AIC, etc.) to occasionally undersmooth substantially, particularly at moderate sample sizes, with REML being somewhat less problematic in this regard. Where appropriate, simpler models such as GLMs may be preferable to GAMs unless GAMs improve predictive ability substantially (in validation sets) for the application in question.
It is therefore an object to provide a method for producing a real-time output based on at least one constraint and a relationship between a vehicle's engine speed, vehicle speed, fuel consumption rate, and indirectly measured operating conditions, comprising: monitoring vehicle speed and fuel consumption rate of the vehicle over an engine speed range of at least one engine of the vehicle; generating a predictive model relating the vehicle's engine speed, vehicle speed, and fuel consumption rate, based on the monitoring; and receiving at least one constraint on at least one of a trip time, trip fuel consumption, vehicle speed, fuel consumption rate, and estimated pollutant emissions; and automatically producing from at least one automated processor, based on the predictive model, and the received at least one constraint, an output constraint, e.g., real-time output comprising a constraint on vehicle operation.
It is also an object to provide a vehicle control system, comprising: a monitor for determining at least a vehicle speed and a fuel consumption rate of the vehicle over an engine speed range of at least one engine of the vehicle; a predictive model relating the vehicle's engine speed, vehicle speed, fuel consumption rate, operating cost, and pollution emissions, generated based on the monitoring; and at least one automated processor configured to automatically produce, based on the predictive model, an output constraint, e.g., a proposed engine speed dependent on at least one constraint representing at least one of a trip time, trip fuel consumption, vehicle speed, fuel consumption rate, and estimated emissions.
It is a further object to provide a control system for a vehicle, comprising: a first input configured to receive information for monitoring at least a vehicle speed and a fuel consumption rate of the vehicle over an engine speed range of at least one engine of the vehicle; a second input configured to receive at least one constraint on at least one of a trip time, trip fuel consumption, vehicle speed, fuel consumption rate, and estimated emissions; a predictive model relating the vehicle's engine speed, vehicle speed, and fuel consumption rate, generated based on the monitoring; and at least one automated processor configured to automatically produce, based on the predictive model, and the received at least one constraint, an output constraint, e.g., an engine speed constraint.
The method may further comprise: monitoring the engine speed during said monitoring, and generating the predictive model further based on the monitored engine speed; monitoring the engine load percentage during said monitoring, and generating the predictive model further based on the monitored engine load; monitoring at least one of wind and water current speed along an axis of motion of the vehicle during said monitoring, and generating the predictive model further based on the monitoring of at least one of present-time or forecast wind and water current velocity vectors; and/or monitoring a propeller pitch during said monitoring, and generating the predictive model further based on the monitored propeller pitch.
The method may further comprise determining a failure of the predictive model; regenerating the predictive model based on newly-acquired data; annotating monitored vehicle speed and fuel consumption rate of the vehicle based on vehicle operating conditions; adaptively updating the predictive model; determining an error between predicted fuel flow rate and actual fuel flow rate; filtering data representing the vehicle's engine speed, vehicle speed, and fuel consumption rate for anomalies before generating the predictive model; and/or tagging data representing the vehicle's engine speed, vehicle speed, and fuel consumption rate with context information.
The predictive model may comprise a generalized additive model, a neural network, and/or a support vector machine, for example.
The received constraint may comprise a trip time, a trip fuel consumption, a vehicle speed, a fuel consumption rate, an estimate of emissions, a cost optimization, and/or an economic optimization of at least fuel cost and time cost.
The predictive model may model a fuel consumption with respect to engine speed and load.
The output constraint may be adaptive with respect to an external condition and/or location.
The vehicle may be a marine vessel, railroad locomotive, automobile, aircraft, or unmanned aerial vehicle, for example.
The control system may further comprise an output configured to control an engine of the vehicle according to the engine speed constraint.
The at least one automated processor may be further configured to generate the predictive model.
The engine speed may be monitored during said monitoring, and the predictive model further generated based on the monitored engine speed.
The engine load percentage may be monitored during said monitoring, and the predictive model may be further generated based on the monitored engine load.
The control system may further comprise an input configured to receive at least one of wind and water current speed along an axis of motion of the vehicle, and the predictive model further generated based on the monitored wind and water current speed along an axis of motion of the vehicle.
The control system may further comprise another input configured to monitor a propeller pitch during said monitoring, and the predictive model is further generated based on the monitored propeller pitch.
The automated processor may be further configured to do at least one of: determine a failure of the predictive model; regenerate the predictive model based on newly-acquired data; annotate monitored vehicle speed and fuel consumption rate of the vehicle based on vehicle operating conditions; adaptively update the predictive model; determine an error between predicted fuel flow rate and actual fuel flow rate; and filter data representing the vehicle's engine speed, vehicle speed, and fuel consumption rate for anomalies before the predictive model is generated.
The predictive model may be formulated using data representing the vehicle's engine speed, vehicle speed, and fuel consumption rate tagged with context information. The predictive model may comprise a generalized additive model, a neural network, and/or a support vector machine.
The received constraint may comprise at least one of a trip time, a trip fuel consumption, a vehicle speed, a fuel consumption rate, an estimate of emissions, a cost optimization, an economic optimization of at least fuel cost and time cost, and a fuel consumption with respect to engine speed.
The output constraint may be adaptive with respect to an external condition and/or location.
The vehicle may be a marine vessel, a railroad locomotive, an automobile, an aircraft, or an unmanned aerial vehicle.
The output constraint may comprise a real-time output comprising a constraint on vehicle operation; an engine speed constraint; a propeller pitch constraint; a combination of engine speed and propeller pitch; and/or a combination of monitored inputs.
One application for this technology is the use of the system to predict vessel planing speed for vessels with planing hulls under different loads and conditions. Boats with planing hulls are designed to rise up and glide on top of the water when enough power is supplied, which is the most fuel-efficient operating mode. These boats may operate like displacement hulls when at rest or at slow speeds but climb toward the surface of the water as they move faster.
Another application would be to provide fuel savings by automatically sending control inputs to a smart governor module or device to set the optimum RPM for the trip considering trip constraints. Trip constraints can be a combination of trip time, trip cost, trip emissions, minimal trip emissions in particular geospatial regions, etc.
In accordance with some embodiments, a machine learning (ML) generated model's fuel flow rate prediction and vessel speed prediction, considering no error in measured speed (i.e., no error/drag), are shown in the accompanying figures.
With a known model for RPM and fuel usage, an RPM-to-emissions model may be generated and used to predict emissions over the course of a trip. Since measured or predicted fuel flow rate is available, the emissions estimation procedure recommended by the United States Environmental Protection Agency may be used and is recreated herein. See, www3.epa.gov/ttnchiel/conference/ei19/session10/trozzi.pdf. The total trip emissions, Etrip, are the sum of the emissions during the three phases of a trip:
Etrip = Ehoteling + Emaneuvering + Ecruising
where hoteling is time spent at dock or in port, maneuvering is time spent approaching a harbor, and cruising is time spent traveling in open water. These phases may be determined by port coordinates, “geo-fencing”, human input, and/or additional programmatic approaches. For each phase of the trip and each pollutant, the trip emissions are
Etrip,i,j,m = Σp (FCj,m,p × EFi,j,m)
where
Etrip=total trip emissions [tons]
FC=fuel consumption [tons]
EF=emission factor [kg/ton]
i=pollutant
j=engine type [slow, medium, high-speed diesel, gas turbine, steam turbine]
m=fuel type [bunker fuel oil, marine diesel, gasoline]
p=trip phase [hoteling, maneuvering, cruising]
Since the constants in the equation (the emission factors EF for each pollutant i, engine type j, and fuel type m) are known explicitly for a given vessel and the variables (FC for each phase p) can be predicted or measured using data from a locally-deployed sensing device, emissions estimates for a given vessel may be made. Additionally, with the use of GPS data, real-time, geo-spatially referenced emissions may be estimated.
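A minimal sketch of this per-phase computation follows; the emission-factor value is a placeholder (not a published EPA figure), and the function and key names are illustrative assumptions.

```python
# Sketch: Etrip = sum over phases of (fuel consumption x emission factor).
EF_KG_PER_TON = {  # (pollutant i, engine type j, fuel type m) -> kg emitted per ton of fuel
    ("NOx", "medium-speed diesel", "marine diesel"): 57.0,  # hypothetical value
}

def trip_emissions_tons(fuel_tons_by_phase, pollutant, engine_type, fuel_type):
    ef = EF_KG_PER_TON[(pollutant, engine_type, fuel_type)]
    emissions_kg = sum(fc * ef for fc in fuel_tons_by_phase.values())
    return emissions_kg / 1000.0  # report in tons, matching the units above

e_nox = trip_emissions_tons(
    {"hoteling": 0.8, "maneuvering": 0.5, "cruising": 12.0},  # tons of fuel per phase
    "NOx", "medium-speed diesel", "marine diesel")
```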
In some embodiments, the difference between predicted speed and measured speed is assumed to be constant for all possible vessel speeds at the analyzed point in space and time. Essentially, if the difference in speed is caused by external factors such as water speed and wind speed, then this difference will be applied equally across a range of variation in vessel parameters (e.g., engine RPM between 1000 and 2000, load between 50 and 100 percent, speed between 50 and 100 percent of a vessel's maximum speed, etc.). Typically, the speed difference will not be affected much by vessel parameters (e.g., RPM, load, speed), so the assumption holds. Some component(s) of the speed error can change with the hydrodynamics and aerodynamics of the vessel and towed load, but for non-planing hulls (e.g., tugboats, fishing boats, etc.) those effects would typically cause minimal error, as the vessel's hydrodynamic and aerodynamic characteristics (for both planing and non-planing hulls) are already accounted for in the model, and a standard towed load's hydrodynamics typically do not change substantially within practical towing speed limits.
As shown in the figures, a first model will be generated as described above to predict speed over ground for a vehicle considering vessel or engine parameters. A second model, referred to as a “trip model,” will be created that predicts the optimal operating range for a vehicle. The trip model will incorporate trip distance, any trip configurations input by the user (fuel cost, fixed costs, hourly costs, etc.), and any trip constraints provided by the user (maximum cost, maximum emissions, maximum time, etc.) to generate output constraints. These output constraints will be used to recommend a range of optimal operating conditions to the user when the user's trip constraints (maximum cost, maximum emissions, maximum time, etc.) can be satisfied.
With reference to the figures, which illustrate an example graphical user interface (GUI) for trip optimization, the trip constraints are set by the operator (some can be automatically populated based on available data) in section A (“Configure Trip”). Section B (“Prediction Outcome of Optimization Strategy”) shows a graph including a previously charted route 1204 between the vessel's current location and the trip's destination, as well as pop-up information 1206 and 1208 comparing the vessel's current operation with the trip model-optimized solution. Section B also includes a “Proceed with Optimization” button 1220, which when clicked on (or otherwise actuated) causes the vessel to operate under the algorithm-optimized solution. Section C (“Predicted Results of Optimized Fuel Index”) shows multiple charts 1210 to illustrate the mathematical relationship between engine load (as represented by “fuel index”) and vessel speed, remaining cost, remaining time, and remaining fuel usage. In these charts, the vessel's current operation is compared with the optimal range computed using the trip model. Section D (“Goal Status”) shows whether each goal set in section A can be satisfied based on the trip model prediction, with color-coded highlighting (e.g., green to indicate a goal can be met, and red to indicate a goal cannot be met).
In various embodiments, section A can include various ways (e.g., a sliding bar, drop-down menu, or the like) that enable the operator to input and/or change trip goal(s) and/or trip cost(s). The other sections (e.g., section B, section C, and section D) can update their content in real-time or near real-time in accordance with computations using the trip model, based on change(s) made in section A.
In some embodiments, the trip model is updated in real-time or near real-time to reflect results based on sensor data or other relevant information as they are collected. The GUI content can be updated at the same rate as, or a slower rate than, the trip model is updated. In some embodiments, the trip model is only updated (and the GUI content correspondingly updated) when a predicted change (e.g., in vessel speed) is above or below a predefined or automatically generated threshold. With these updates, the vessel operator can be properly alerted to unexpected situations and take further action.
A computing device (e.g., an edge device, some embodiments of which are described in U.S. application Ser. No. 15/703,487, filed Sep. 13, 2017) that implements various embodiments (or portions thereof) of the presently disclosed technology may be constructed as follows. A controller may include any one or combination of a system-on-chip, a commercially available embedded processor, or an Arduino, MeOS, MicroPython, Raspberry Pi, or other processor board. The device may also include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a programmable combinatorial circuit (e.g., FPGA), a processor (shared, dedicated, or group) or memory (shared, dedicated, or group) that may execute one or more software or firmware programs, or other suitable components that provide the described functionality.
In embodiments, one or more of the vehicle sensors that determine, sense, and/or provide to the controller data regarding one or more other vehicle characteristics may be and/or include Internet of Things (“IoT”) devices. IoT devices may be objects or “things”, each of which may be embedded with hardware or software that may enable connectivity to a network, typically to provide information to a system, such as the controller. Because the IoT devices are enabled to communicate over a network, the IoT devices may exchange event-based data with service providers or systems in order to enhance or complement the services that may be provided. These IoT devices are typically able to transmit data autonomously or with little to no user intervention. In embodiments, a connection may accommodate vehicle sensors as IoT devices and may include IoT-compatible connectivity, which may include any or all of WiFi, LoRan, 900 MHz WiFi, Bluetooth, Bluetooth Low Energy, USB, UWB, etc. Wired connections, such as Ethernet 1000baseT, CANBus, USB 3.0, USB 3.1, etc., may be employed.
Embodiments may be implemented into a system using any suitable hardware and/or software configured as desired. The computing device may house a board such as a motherboard which may include a number of components, including but not limited to a processor and at least one communication interface device. The processor may include one or more processor cores physically and electrically coupled to the motherboard. The at least one communication interface device may also be physically and electrically coupled to the motherboard. In further implementations, the communication interface device may be part of the processor. In embodiments, the processor may include a hardware accelerator (e.g., FPGA).
Depending on its applications, computing device may include other components which include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), and flash memory. In embodiments, flash and/or ROM may include executable programming instructions configured to implement the algorithms, operating system, applications, user interface, etc.
In embodiments, computing device may further include an analog-to-digital converter, a digital-to-analog converter, a programmable gain amplifier, a sample-and-hold amplifier, a data acquisition subsystem, a pulse width modulator input, a pulse width modulator output, a graphics processor, a digital signal processor, a crypto processor, a chipset, a cellular radio, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device or subsystem, a compass (magnetometer), an accelerometer, a barometer (manometer), a gyroscope, a speaker, a camera, a mass storage device (such as a SIM card interface, and SD memory or micro-SD memory interface, SATA interface, hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth), a microphone, a filter, an oscillator, a pressure sensor, and/or an RFID chip.
The communication network interface device may enable wireless communications for the transfer of data to and from the computing device. The term “wireless” and its derivatives may be used to describe circuits, devices, systems, processes, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 406 may implement any of a number of wireless standards or protocols, including but not limited to Institute of Electrical and Electronics Engineers (IEEE) standards including Wi-Fi (IEEE 802.11 family), IEEE 802.16 standards (e.g., IEEE 802.16-2005 Amendment), Long-Term Evolution (LTE) project along with any amendments, updates, and/or revisions (e.g., advanced LTE project, ultra mobile broadband (UMB) project (also referred to as “3GPP2”), etc.). IEEE 802.16 compatible BWA networks are generally referred to as WiMAX networks, an acronym that stands for Worldwide Interoperability for Microwave Access, which is a certification mark for products that pass conformity and interoperability tests for the IEEE 802.16 standards. The communication chip 406 may operate in accordance with a Global System for Mobile Communication (GSM), General Packet Radio Service (GPRS), Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Evolved HSPA (E-HSPA), or LTE network. The communication chip 406 may operate in accordance with Enhanced Data for GSM Evolution (EDGE), GSM EDGE Radio Access Network (GERAN), Universal Terrestrial Radio Access Network (UTRAN), or Evolved UTRAN (E-UTRAN). The communication chip 406 may operate in accordance with Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless Telecommunications (DECT), Evolution-Data Optimized (EV-DO), derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The communication chip may operate in accordance with other wireless protocols in other embodiments. The computing device may include a plurality of communication chips. For instance, a first communication chip may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth and a second communication chip may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.
The processor of the computing device may include a die in a package assembly. The term “processor” may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory.
Although certain embodiments have been illustrated and described herein for purposes of description, a wide variety of alternate and/or equivalent embodiments or implementations calculated to achieve the same purposes may be substituted for the embodiments shown and described without departing from the scope of the present disclosure. The various embodiments and optional features recited herein may be employed in any combination, sub-combination, or permutation, consistent with the discussions herein. This application is intended to cover any adaptations or variations of the embodiments discussed herein, limited only by the claims.
In accordance with some embodiments, the presently disclosed technology implements one or more algorithms selected from the following:
Algorithm 1: Create a statistical model of speed vs. RPM and load, create a statistical model for fuel flow vs RPM and load using machine learning
Data: engine data time series containing time-stamp, engine RPM, load, and fuel flow from the engine's Electronic Control Module; and speed, latitude, and longitude time series data from a GPS unit; all time-synchronized over the training period.
Result: a machine learning model relating the engines' average RPM, average load, total fuel flow, and vessel speed
initialization;
step 2: create average engine RPM from multiple engine RPMs, average engine load from multiple engine loads, average engine fuel flow rate from multiple engine fuel flow rates;
step 3: define a predictable range for RPM (e.g., RPM greater than idle range);
step 4: create a new Boolean column called isStable that can store true/false for predictors' combined stability;
step 5: compute isStable and store the values as a part of the time series (e.g., isStable=true if, within the last n minutes, the change in the predictor variables (RPM, load) is within k standard deviations; else isStable=false; a sketch of this check follows the algorithm);
if predictor variables are within predictable range and isStable=true for some predetermined time then
if all the engines are in forward propulsion mode and RPMs are almost equal (e.g., all engine RPMs are within 5% of mean RPM) then
step 6: include the record for model creation;
else
step 7: exclude the record from model creation;
end
else
step 8: exclude the record from model creation;
end
step 9: create a statistical model of speed vs. RPM and load using machine learning;
step 10: create a statistical model of fuel flow vs. RPM and load using machine learning;
step 11: test different model building methods in order to reduce or eliminate model bias (e.g., splines, support vector machines, neural networks);
step 12: choose the best fit model for the training data;
step 13: combine the two models to create one model that has engine's average RPM, average load, total fuel flow, and vessel speed;
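A hedged sketch of the isStable check of step 5 above follows; the window length, threshold k, reference standard deviations, and column names are assumptions for illustration.

```python
# Sketch: mark records where RPM and load have been stable over the last n minutes.
import pandas as pd

def mark_stable(df: pd.DataFrame, n_minutes: int = 5, k: float = 1.0,
                rpm_sd: float = 25.0, load_sd: float = 3.0) -> pd.DataFrame:
    window = f"{n_minutes}min"                      # requires a DatetimeIndex
    rpm_change = df["rpm"].rolling(window).max() - df["rpm"].rolling(window).min()
    load_change = df["load_pct"].rolling(window).max() - df["load_pct"].rolling(window).min()
    df["isStable"] = (rpm_change <= k * rpm_sd) & (load_change <= k * load_sd)
    return df
```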
Algorithm 2: Convert statistical model to a look-up table
Data: statistical model from Algorithm 1
Result: model look-up table
initialization;
if model creation is successful then
create the model look-up table with n+m columns considering the model represents ƒ: ℝn→ℝm;
e.g., a lookup table for engine RPM 0-2000 and load 0-100 will have 2,001×101=202,101 rows assuming an interval of 1 for each independent variable. The model will have 2+2=4 columns assuming independent variables of engine RPM and load and dependent variables of fuel flow and vessel speed. For each engine RPM and load combination, the statistical model is used to predict the values of the dependent parameters, and those predicted values are then stored in the look-up table;
e.g., a lookup table for a bounded region between engine RPM 1000-2000 and load 40-100 will have 1,001×61=61,061 rows assuming an interval of 1 for each independent variable;
else
No operation
end
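One possible realization of Algorithm 2 is sketched below; predict_fn stands in for the fitted model of Algorithm 1 (returning fuel flow and vessel speed for a given RPM and load), and the bounds and column names are illustrative assumptions.

```python
# Sketch: materialize the fitted map as a look-up table over a bounded RPM/load grid.
import numpy as np
import pandas as pd

def build_lookup_table(predict_fn, rpm_range=(1000, 2000), load_range=(40, 100)):
    rpm_values = np.arange(rpm_range[0], rpm_range[1] + 1)    # interval of 1
    load_values = np.arange(load_range[0], load_range[1] + 1)
    rows = [(r, l, *predict_fn(r, l)) for r in rpm_values for l in load_values]
    return pd.DataFrame(rows, columns=["rpm", "load_pct", "fuel_rate", "vessel_speed"])

# At run time a prediction becomes a table lookup instead of a model evaluation, e.g.:
# row = table[(table.rpm == 1500) & (table.load_pct == 75)]
```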
Algorithm 3: Create error statistics for the engine parameter of interest during training period
Data: statistical model and training data
Result: error statistics
initialization;
if model creation is successful then
use the model from Algorithm 1 or look-up table from Algorithm 2 to predict the time series of interest;
calculate the difference between actual value and predicted value;
create error time series;
else
Error Message;
end
calculate error mean and error standard deviation;
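A short sketch of the training-period error statistics, together with the run-time standardized (z) score used later in Algorithm 5, is shown below; the names are placeholders.

```python
# Sketch: error statistics over the training period and the run-time z score.
import numpy as np

def error_stats(actual, predicted):
    err = np.asarray(actual) - np.asarray(predicted)   # error time series
    return err.mean(), err.std(ddof=1)

def z_score(actual_now, predicted_now, err_mean, err_std):
    return ((actual_now - predicted_now) - err_mean) / err_std
```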
Algorithm 4: Filter engine RPM values to a range satisfying the given constraints
Data: updated model that reflects current conditions, constraints, e.g., current load, speed range, trip time limit, trip fuel cost limit, emissions limit, etc.
Result: optimum range of RPMs and trip time and trip cost for each RPM
initialization;
at run time:
step 1: Apply the constraints and filter the RPM ranges that satisfy the constraints;
step 2: output filtered RPM and associated fuel flow and speed data;
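A compact sketch of this filtering step is shown below; the table argument is assumed to be the per-RPM trip model (speed, trip time, trip fuel/cost) produced by the earlier sketches, and the constraint names are illustrative.

```python
# Sketch: keep only RPM values whose predicted trip outcomes satisfy the constraints.
def filter_rpm(table, max_hours=None, max_fuel=None, speed_range=None):
    ok = table
    if max_hours is not None:
        ok = ok[ok["trip_time"] <= max_hours]
    if max_fuel is not None:
        ok = ok[ok["trip_fuel"] <= max_fuel]
    if speed_range is not None:
        lo, hi = speed_range
        ok = ok[(ok["speed"] >= lo) & (ok["speed"] <= hi)]
    return ok[["rpm", "speed", "trip_time", "trip_fuel"]]
```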
Algorithm 5: System algorithm
Data: engine data training and near real-time test data, k (the standardized error threshold), trip distance, trip time constraint if applicable
Result: updated model that reflects current conditions
initialization;
at design time:
step 1: Use Algorithm 1 to create the speed vs. RPM and load model and the fuel flow vs. RPM and load model from training data;
step 2: Use Algorithm 3 to create error statistics;
step 3: optionally, use Algorithm 2 to create model look-up table;
step 4: deploy the model on edge device and/or cloud database;
at runtime:
while engine data is available and predictors are within range and engine is in steady state do
if model deployment is successful then
step 5: compute and save z error score(s) of current speed data using Algorithm 3;
if z score is greater than k then
step 6: Re-generate new speed vs RPM model assuming the error is constant;
step 7: Calculate the trip time vs RPM model using the new speed/RPM model from step 6;
else
step 7: Calculate trip time vs RPM model using the previous training speed/RPM model;
end
step 8: compute and save z error score(s) of current fuel flow data using Algorithm 3;
if z score is greater than k then
step 9: Re-generate the fuel flow rate vs. RPM curve;
step 10: Re-generate the trip fuel usage vs. RPM model using the predicted trip time from step 7 above
else
step 10: Calculate the trip fuel usage vs RPM model using the previous training fuel-flow/RPM model;
end
step 11: Use Algorithm 4 to calculate optimal engine RPM and associated trip time and trip cost information;
else
no operation;
end
end
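A compressed sketch of the runtime portion of Algorithm 5 (steps 5 through 11), reusing the hypothetical helpers sketched above. Treating the drift detected in steps 6 and 9 as a constant offset added to the table's predictions is one plausible reading of "assuming the error is constant," not the required implementation; the default threshold k = 3.0, the (mean, std) statistics tuples, and all names are assumptions.

```python
def runtime_update(model, table, speed_err_stats, fuel_err_stats,
                   rpm_now, load_now, speed_now, fuel_flow_now,
                   trip_distance_nm, fuel_price, k=3.0, **constraints):
    """One pass of Algorithm 5's runtime loop: test the speed and fuel-flow
    predictions for drift (steps 5 and 8), bias-correct the look-up table if the
    standardized error exceeds k (steps 6-7 and 9-10), then filter RPMs against
    the constraints (step 11). *_err_stats are (error mean, error std) tuples
    produced by Algorithm 3."""
    fuel_pred, speed_pred = model.predict(rpm_now, load_now)
    corrected = table.copy()

    # Steps 5-7: speed drift test and constant-offset correction of the speed column.
    if abs(z_score(speed_now, speed_pred[0], *speed_err_stats)) > k:
        corrected[:, 3] += speed_now - speed_pred[0]

    # Steps 8-10: fuel-flow drift test and constant-offset correction of the fuel column.
    if abs(z_score(fuel_flow_now, fuel_pred[0], *fuel_err_stats)) > k:
        corrected[:, 2] += fuel_flow_now - fuel_pred[0]

    # Step 11: optimal RPM range and associated trip time/cost under the constraints.
    return filter_rpm_range(corrected, load_now, trip_distance_nm,
                            fuel_price, **constraints)
```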
The various embodiments described above can be combined to provide further embodiments. All of the U.S. patents, U.S. patent application publications, U.S. patent applications, foreign patents, foreign patent applications and non-patent publications referred to in the present application and/or listed in the Application Data Sheet are incorporated herein by reference, in their entirety. Aspects of the embodiments can be modified, if necessary to employ concepts of the various patents, applications and publications to provide yet further embodiments. In cases where any document incorporated by reference conflicts with the present application, the present application controls.
These and other changes can be made to the embodiments in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.
This application is a continuation application of co-pending U.S. patent application Ser. No. 16/678,991, filed Nov. 18, 2019, which claims priority to U.S. Provisional Application No. 62/758,385, filed Nov. 9, 2018 and entitled “USE OF MACHINE LEARNING FOR PREDICTION, PLANNING, AND OPTIMIZATION OF TIME, FUEL COST, AND/OR POLLUTANT EMISSIONS,” the contents of each of which are hereby incorporated by reference in their entirety. In cases where the present application conflicts with a document incorporated by reference, the present application controls.
Number | Name | Date | Kind |
---|---|---|---|
3951626 | Carey | Apr 1976 | A |
3960012 | Ingram | Jun 1976 | A |
3960060 | Eickmann | Jun 1976 | A |
3972224 | Ingram | Aug 1976 | A |
3974802 | Lundquist | Aug 1976 | A |
4165795 | Lynch et al. | Aug 1979 | A |
4212066 | Carp et al. | Jul 1980 | A |
4240381 | Lowther | Dec 1980 | A |
4286324 | Ingram | Aug 1981 | A |
4303377 | Schwartzman | Dec 1981 | A |
4307450 | Carp et al. | Dec 1981 | A |
4333548 | Jones | Jun 1982 | A |
4341984 | Parker et al. | Jul 1982 | A |
4354144 | McCarthy | Oct 1982 | A |
4364265 | Dickson | Dec 1982 | A |
4436482 | Inoue et al. | Mar 1984 | A |
4469055 | Caswell | Sep 1984 | A |
4661714 | Satterthwaite et al. | Apr 1987 | A |
4742681 | Haberkern et al. | May 1988 | A |
4777866 | Tan | Oct 1988 | A |
4796592 | Höfer et al. | Jan 1989 | A |
4854274 | Dingess | Aug 1989 | A |
4858569 | Cser et al. | Aug 1989 | A |
4939898 | Ichimura et al. | Jul 1990 | A |
4994188 | Prince | Feb 1991 | A |
5076229 | Stanley | Dec 1991 | A |
5097814 | Smith | Mar 1992 | A |
5165373 | Cheng | Nov 1992 | A |
5195469 | Syed | Mar 1993 | A |
5259344 | Huang et al. | Nov 1993 | A |
5266009 | Tasaki et al. | Nov 1993 | A |
5474036 | Hansen et al. | Dec 1995 | A |
5520161 | Klopp | May 1996 | A |
5632144 | Isobe | May 1997 | A |
5658176 | Jordan | Aug 1997 | A |
5679035 | Jordan | Oct 1997 | A |
5788004 | Friedmann et al. | Aug 1998 | A |
5832897 | Zhang | Nov 1998 | A |
6092021 | Ehlbeck et al. | Jul 2000 | A |
6213089 | Cheng | Apr 2001 | B1 |
6295970 | Kawakami | Oct 2001 | B1 |
6319168 | Morris et al. | Nov 2001 | B1 |
6325047 | Kawakami | Dec 2001 | B2 |
6359421 | Mueller et al. | Mar 2002 | B1 |
6390059 | Shiraishi et al. | May 2002 | B1 |
6418365 | Löffler et al. | Jul 2002 | B1 |
6427659 | Shiraishi et al. | Aug 2002 | B2 |
6497223 | Tuken et al. | Dec 2002 | B1 |
6512983 | Bauer et al. | Jan 2003 | B1 |
6520144 | Shiraishi et al. | Feb 2003 | B2 |
6564546 | Brown | May 2003 | B2 |
6588258 | Ju | Jul 2003 | B2 |
6641365 | Karem | Nov 2003 | B2 |
6732706 | Shiraishi et al. | May 2004 | B2 |
6752733 | Rogers et al. | Jun 2004 | B2 |
6804997 | Schwulst | Oct 2004 | B1 |
6955081 | Schwulst | Oct 2005 | B2 |
6973792 | Hicks | Dec 2005 | B2 |
6990855 | Tuken et al. | Jan 2006 | B2 |
7013863 | Shiraishi et al. | Mar 2006 | B2 |
7121253 | Shiraishi et al. | Oct 2006 | B2 |
7143580 | Ge | Dec 2006 | B2 |
7225793 | Schwulst et al. | Jun 2007 | B2 |
7325532 | Dölker | Feb 2008 | B2 |
7392129 | Hill et al. | Jun 2008 | B2 |
7460958 | Walsh et al. | Dec 2008 | B2 |
7488357 | Tavlarides et al. | Feb 2009 | B2 |
7542842 | Hill et al. | Jun 2009 | B2 |
8155868 | Xing et al. | Apr 2012 | B1 |
8196686 | Grieve | Jun 2012 | B2 |
8291587 | St. Mary | Oct 2012 | B2 |
8384397 | Bromberg et al. | Feb 2013 | B2 |
8418462 | Piper | Apr 2013 | B2 |
8442729 | Tsukada et al. | May 2013 | B2 |
8514061 | Wagner | Aug 2013 | B2 |
8534401 | Dimitrov et al. | Sep 2013 | B2 |
8539764 | Howard | Sep 2013 | B2 |
8608620 | Kim et al. | Dec 2013 | B2 |
8640437 | Brostmeyer | Feb 2014 | B1 |
8955474 | Derbin et al. | Feb 2015 | B1 |
8996290 | Robl et al. | Mar 2015 | B2 |
9260838 | Sawada et al. | Feb 2016 | B2 |
9266542 | Daum et al. | Feb 2016 | B2 |
9267454 | Wilcutts et al. | Feb 2016 | B2 |
9371629 | Kim | Jun 2016 | B2 |
9399185 | Bromberg et al. | Jul 2016 | B2 |
9424521 | Bloomquist et al. | Aug 2016 | B2 |
9441532 | Pegg et al. | Sep 2016 | B2 |
9512794 | Serrano et al. | Dec 2016 | B2 |
9574492 | Owens | Feb 2017 | B2 |
9586805 | Shock | Mar 2017 | B1 |
9592964 | Göllü | Mar 2017 | B2 |
9637111 | Nikovski et al. | May 2017 | B2 |
9638537 | Abramson et al. | May 2017 | B2 |
9674880 | Egner et al. | Jun 2017 | B1 |
9711050 | Ansari | Jul 2017 | B2 |
9764732 | Kim et al. | Sep 2017 | B2 |
9775562 | Egner et al. | Oct 2017 | B2 |
9790080 | Shock | Oct 2017 | B1 |
9792259 | Heinz et al. | Oct 2017 | B2 |
9792575 | Khasis | Oct 2017 | B2 |
9815683 | Kalala et al. | Nov 2017 | B1 |
9819296 | Bailey et al. | Nov 2017 | B2 |
9836056 | Ansari | Dec 2017 | B2 |
9882987 | Kodaypak et al. | Jan 2018 | B2 |
9889840 | Jeong et al. | Feb 2018 | B2 |
9904264 | Mummigatti | Feb 2018 | B2 |
9904900 | Cao | Feb 2018 | B2 |
9906381 | Mummigatti | Feb 2018 | B2 |
9923124 | Mazed et al. | Mar 2018 | B2 |
9932220 | Shock | Apr 2018 | B1 |
9932925 | Ra | Apr 2018 | B2 |
9946262 | Ansari | Apr 2018 | B2 |
9981840 | Shock | May 2018 | B2 |
9984134 | Imai et al. | May 2018 | B2 |
9992701 | Egner et al. | Jun 2018 | B2 |
10007513 | Malladi et al. | Jun 2018 | B2 |
10014812 | Bailey et al. | Jul 2018 | B2 |
10034066 | Tran et al. | Jul 2018 | B2 |
10056008 | Sweany et al. | Aug 2018 | B1 |
10075834 | Kodaypak et al. | Sep 2018 | B1 |
10087065 | Shock | Oct 2018 | B2 |
10089370 | Imai et al. | Oct 2018 | B2 |
10089610 | Chow et al. | Oct 2018 | B2 |
10091276 | Bloomquist et al. | Oct 2018 | B2 |
10106237 | Kaiser et al. | Oct 2018 | B2 |
10111272 | Withers et al. | Oct 2018 | B1 |
10422290 | Liao-McPherson | Sep 2019 | B1 |
20010015194 | Shiraishi et al. | Aug 2001 | A1 |
20010032617 | Kawakami | Oct 2001 | A1 |
20020055815 | Ju | May 2002 | A1 |
20020144671 | Shiraishi et al. | Oct 2002 | A1 |
20030139248 | Rogers et al. | Jul 2003 | A1 |
20040011325 | Benson et al. | Jan 2004 | A1 |
20040125216 | Keskar et al. | Jul 2004 | A1 |
20040134268 | Tuken et al. | Jul 2004 | A1 |
20040155468 | Yang | Aug 2004 | A1 |
20040159721 | Shiraishi et al. | Aug 2004 | A1 |
20050039526 | Schwulst | Feb 2005 | A1 |
20050169743 | Hicks | Aug 2005 | A1 |
20060086089 | Ge | Apr 2006 | A1 |
20060107586 | Tavlarides et al. | May 2006 | A1 |
20060118079 | Shiraishi et al. | Jun 2006 | A1 |
20060118086 | Schwulst et al. | Jun 2006 | A1 |
20060155486 | Walsh et al. | Jul 2006 | A1 |
20070073467 | Hill et al. | Mar 2007 | A1 |
20070142997 | Dolker | Jun 2007 | A1 |
20080034720 | Helfrich et al. | Feb 2008 | A1 |
20080047272 | Schoell | Feb 2008 | A1 |
20080306636 | Caspe-Detzer et al. | Dec 2008 | A1 |
20080306674 | Hill et al. | Dec 2008 | A1 |
20090017987 | Satou et al. | Jan 2009 | A1 |
20090320461 | Morinaga et al. | Dec 2009 | A1 |
20100018479 | Hu | Jan 2010 | A1 |
20100018480 | Hu | Jan 2010 | A1 |
20100101409 | Bromberg et al. | Apr 2010 | A1 |
20100138118 | Tsukada et al. | Jun 2010 | A1 |
20100206721 | Snidvongs | Aug 2010 | A1 |
20100313418 | St. Mary | Dec 2010 | A1 |
20100318247 | Kumar | Dec 2010 | A1 |
20110088386 | Howard | Apr 2011 | A1 |
20110148614 | Wagner | Jun 2011 | A1 |
20110282561 | Mitani et al. | Nov 2011 | A1 |
20110283695 | Piper | Nov 2011 | A1 |
20120022734 | Choi et al. | Jan 2012 | A1 |
20120191280 | Ohno | Jul 2012 | A1 |
20120221227 | Alfieri et al. | Aug 2012 | A1 |
20130125745 | Bromberg et al. | May 2013 | A1 |
20130151115 | Lee | Jun 2013 | A1 |
20130160744 | Giovenga | Jun 2013 | A1 |
20140007574 | Pegg et al. | Jan 2014 | A1 |
20140039768 | Sawada et al. | Feb 2014 | A1 |
20140041626 | Wilcutts et al. | Feb 2014 | A1 |
20140165561 | Kingsbury | Jun 2014 | A1 |
20140290595 | Owens | Oct 2014 | A1 |
20140336905 | Kim | Nov 2014 | A1 |
20150046060 | Nikovski et al. | Feb 2015 | A1 |
20150169714 | Imai et al. | Jun 2015 | A1 |
20150233279 | Derbin et al. | Aug 2015 | A1 |
20150293981 | Imai et al. | Oct 2015 | A1 |
20150339586 | Adjaoute | Nov 2015 | A1 |
20150344036 | Kristinsson et al. | Dec 2015 | A1 |
20150381648 | Mathis | Dec 2015 | A1 |
20160016525 | Chauncey et al. | Jan 2016 | A1 |
20160107650 | Jeong et al. | Apr 2016 | A1 |
20160108805 | Ferguson et al. | Apr 2016 | A1 |
20160117785 | Lerick et al. | Apr 2016 | A1 |
20160137208 | Powers et al. | May 2016 | A1 |
20160159339 | Cho et al. | Jun 2016 | A1 |
20160159364 | Wilcutts et al. | Jun 2016 | A1 |
20160160786 | Ra | Jun 2016 | A1 |
20160196527 | Bose et al. | Jul 2016 | A1 |
20160201586 | Serrano et al. | Jul 2016 | A1 |
20160216130 | Abramson et al. | Jul 2016 | A1 |
20160217381 | Bloomquist et al. | Jul 2016 | A1 |
20160221578 | Tang et al. | Aug 2016 | A1 |
20160269436 | Danielson et al. | Sep 2016 | A1 |
20160288782 | Kim et al. | Oct 2016 | A1 |
20160334767 | Mummigatti | Nov 2016 | A1 |
20160337198 | Mummigatti | Nov 2016 | A1 |
20160337441 | Bloomquist et al. | Nov 2016 | A1 |
20160349330 | Barfield, Jr. et al. | Dec 2016 | A1 |
20160357187 | Ansari | Dec 2016 | A1 |
20160357188 | Ansari | Dec 2016 | A1 |
20160357262 | Ansari | Dec 2016 | A1 |
20160358084 | Bloomquist et al. | Dec 2016 | A1 |
20160358477 | Ansari | Dec 2016 | A1 |
20160362096 | Nikovski et al. | Dec 2016 | A1 |
20160364678 | Cao | Dec 2016 | A1 |
20160364679 | Cao | Dec 2016 | A1 |
20160364812 | Cao | Dec 2016 | A1 |
20160364823 | Cao | Dec 2016 | A1 |
20170018688 | Mazed et al. | Jan 2017 | A1 |
20170022015 | Göllü | Jan 2017 | A1 |
20170034644 | Chennakeshu | Feb 2017 | A1 |
20170037790 | Kim et al. | Feb 2017 | A1 |
20170046669 | Chow et al. | Feb 2017 | A1 |
20170051689 | Serrano et al. | Feb 2017 | A1 |
20170060567 | Kim et al. | Mar 2017 | A1 |
20170060574 | Malladi et al. | Mar 2017 | A1 |
20170080931 | D'Amato | Mar 2017 | A1 |
20170142204 | Kodaypak et al. | May 2017 | A1 |
20170151928 | Kang et al. | Jun 2017 | A1 |
20170159556 | Owens | Jun 2017 | A1 |
20170176958 | Binotto et al. | Jun 2017 | A1 |
20170177546 | Heinz et al. | Jun 2017 | A1 |
20170184315 | Nolan et al. | Jun 2017 | A1 |
20170185956 | Göllü | Jun 2017 | A1 |
20170198458 | Cho et al. | Jul 2017 | A1 |
20170200324 | Chennakeshu | Jul 2017 | A1 |
20170208540 | Egner et al. | Jul 2017 | A1 |
20170211453 | Sappok et al. | Jul 2017 | A1 |
20170214760 | Lee et al. | Jul 2017 | A1 |
20170234691 | Abramson et al. | Aug 2017 | A1 |
20170238346 | Egner et al. | Aug 2017 | A1 |
20170260920 | Nakada | Sep 2017 | A1 |
20170262790 | Khasis | Sep 2017 | A1 |
20170262820 | Al Salah | Sep 2017 | A1 |
20170269599 | Ansari | Sep 2017 | A1 |
20170272972 | Egner et al. | Sep 2017 | A1 |
20170279957 | Abramson et al. | Sep 2017 | A1 |
20170286572 | Hershey et al. | Oct 2017 | A1 |
20170287335 | Ansari | Oct 2017 | A1 |
20170318360 | Tran et al. | Nov 2017 | A1 |
20170323249 | Khasis | Nov 2017 | A1 |
20170328679 | Smith | Nov 2017 | A1 |
20170328680 | Smith | Nov 2017 | A1 |
20170328681 | Smith | Nov 2017 | A1 |
20170328682 | Smith | Nov 2017 | A1 |
20170328683 | Smith | Nov 2017 | A1 |
20170344620 | Modarresi | Nov 2017 | A1 |
20180005178 | Göllü | Jan 2018 | A1 |
20180017405 | Chen et al. | Jan 2018 | A1 |
20180020477 | Neubacher | Jan 2018 | A1 |
20180023489 | Webb et al. | Jan 2018 | A1 |
20180025408 | Xu et al. | Jan 2018 | A1 |
20180025430 | Perl et al. | Jan 2018 | A1 |
20180032836 | Hurter | Feb 2018 | A1 |
20180038703 | Verma et al. | Feb 2018 | A1 |
20180047107 | Perl et al. | Feb 2018 | A1 |
20180054376 | Hershey et al. | Feb 2018 | A1 |
20180073459 | Han et al. | Mar 2018 | A1 |
20180075380 | Perl et al. | Mar 2018 | A1 |
20180091506 | Chow et al. | Mar 2018 | A1 |
20180095470 | Ansari | Apr 2018 | A1 |
20180097883 | Chow et al. | Apr 2018 | A1 |
20180099855 | Kalala et al. | Apr 2018 | A1 |
20180099858 | Shock | Apr 2018 | A1 |
20180099862 | Shock | Apr 2018 | A1 |
20180099863 | Shock | Apr 2018 | A1 |
20180099864 | Shock | Apr 2018 | A1 |
20180101183 | Shock | Apr 2018 | A1 |
20180101184 | Shock | Apr 2018 | A1 |
20180108023 | Stewart et al. | Apr 2018 | A1 |
20180108942 | Oh | Apr 2018 | A1 |
20180121903 | Al Salah | May 2018 | A1 |
20180122234 | Nascimento et al. | May 2018 | A1 |
20180122237 | Nascimento et al. | May 2018 | A1 |
20180137219 | Goldfarb et al. | May 2018 | A1 |
20180158020 | Khasis | Jun 2018 | A1 |
20180171592 | Yun et al. | Jun 2018 | A1 |
20180176329 | Chen et al. | Jun 2018 | A1 |
20180176663 | Damaggio | Jun 2018 | A1 |
20180176664 | Damaggio | Jun 2018 | A1 |
20180183661 | Wouhaybi et al. | Jun 2018 | A1 |
20180188704 | Cella et al. | Jul 2018 | A1 |
20180188714 | Cella et al. | Jul 2018 | A1 |
20180188715 | Cella et al. | Jul 2018 | A1 |
20180189332 | Asher et al. | Jul 2018 | A1 |
20180189344 | Akwule et al. | Jul 2018 | A1 |
20180189659 | Manna | Jul 2018 | A1 |
20180189717 | Cao | Jul 2018 | A1 |
20180195254 | Yun et al. | Jul 2018 | A1 |
20180197418 | Chu et al. | Jul 2018 | A1 |
20180202379 | Nagashima et al. | Jul 2018 | A1 |
20180210425 | Cella et al. | Jul 2018 | A1 |
20180210426 | Cella et al. | Jul 2018 | A1 |
20180210427 | Cella et al. | Jul 2018 | A1 |
20180215380 | Devi | Aug 2018 | A1 |
20180218452 | Guensler et al. | Aug 2018 | A1 |
20180229998 | Shock | Aug 2018 | A1 |
20180230919 | Nagashima et al. | Aug 2018 | A1 |
20180253073 | Cella et al. | Sep 2018 | A1 |
20180253074 | Cella et al. | Sep 2018 | A1 |
20180253075 | Cella et al. | Sep 2018 | A1 |
20180255374 | Cella et al. | Sep 2018 | A1 |
20180255375 | Cella et al. | Sep 2018 | A1 |
20180255376 | Cella et al. | Sep 2018 | A1 |
20180255377 | Cella et al. | Sep 2018 | A1 |
20180255378 | Cella et al. | Sep 2018 | A1 |
20180255379 | Cella et al. | Sep 2018 | A1 |
20180255380 | Cella et al. | Sep 2018 | A1 |
20180255381 | Cella et al. | Sep 2018 | A1 |
20180255382 | Cella et al. | Sep 2018 | A1 |
20180255383 | Cella et al. | Sep 2018 | A1 |
20180262574 | Choi et al. | Sep 2018 | A1 |
20180270121 | Stringfellow | Sep 2018 | A1 |
20180274927 | Epperlein et al. | Sep 2018 | A1 |
20180279032 | Boesen | Sep 2018 | A1 |
20180284735 | Cella et al. | Oct 2018 | A1 |
20180284736 | Cella et al. | Oct 2018 | A1 |
20180284737 | Cella et al. | Oct 2018 | A1 |
20180284741 | Cella et al. | Oct 2018 | A1 |
20180284742 | Cella et al. | Oct 2018 | A1 |
20180284743 | Cella et al. | Oct 2018 | A1 |
20180284744 | Cella et al. | Oct 2018 | A1 |
20180284745 | Cella et al. | Oct 2018 | A1 |
20180284746 | Cella et al. | Oct 2018 | A1 |
20180284747 | Cella et al. | Oct 2018 | A1 |
20180284749 | Cella et al. | Oct 2018 | A1 |
20180284752 | Cella et al. | Oct 2018 | A1 |
20180284753 | Cella et al. | Oct 2018 | A1 |
20180284754 | Cella et al. | Oct 2018 | A1 |
20180284755 | Cella et al. | Oct 2018 | A1 |
20180284756 | Cella et al. | Oct 2018 | A1 |
20180284757 | Cella et al. | Oct 2018 | A1 |
20180284758 | Cella et al. | Oct 2018 | A1 |
20180288586 | Tran et al. | Oct 2018 | A1 |
20180288641 | Mildh et al. | Oct 2018 | A1 |
20180290877 | Shock | Oct 2018 | A1 |
20180293816 | Garrett et al. | Oct 2018 | A1 |
20180299878 | Cella et al. | Oct 2018 | A1 |
20180300124 | Malladi et al. | Oct 2018 | A1 |
20180308371 | Cao et al. | Oct 2018 | A1 |
20180355811 | Li | Dec 2018 | A1 |
20190100217 | Livshiz | Apr 2019 | A1 |
Entry |
---|
Ahmed et al., “A Survey on Mobile Edge Computing,” 10th IEEE International Conference on Intelligent Systems and Control (ISCO '16), India. (8 pages). |
Atkinson et al., “Dynamic Model-Based Calibration Optimization: An Introduction and Application to Diesel Engines,” SAE Technical Paper Series, 2005-01-0026, 2005. (15 pages). |
Atkinson et al., “Using Model-Based Rapid Transient Calibration to Reduce Fuel Consumption and Emissions in Diesel Engines,” SAE Technical Paper Series, 2008-01-1365, 2008. (18 pages). |
Augustin et al., “On quantile quantile plots for generalized linear models,” Computational Statistics & Data Analysis, 56(8):2404-2409, 2012. (14 pages). |
Cuadrado-Cordero, “Microclouds: an approach for a network-aware energy-efficient decentralised cloud,” HAL archives-ouvertes, PhD thesis submitted Mar. 29, 2017. (152 pages). |
Edge Computing—Microsoft Research, Oct. 29, 2008. Retrieved Sep. 24, 2018. (4 pages). |
Fahrmeir et al., “Bayesian inference for generalized additive mixed models based on Markov random field priors,” Appl. Statist. 50(2):201-220, 2001. |
Felde, “On edge architecture,” Blog of Christian Felde Technology, computers and quant finance, Dec. 20, 2017. (2 pages) https://blog.cfelde.com/2017/12/on-edge-architecture/. |
Gai et al., “Dynamic energy-aware cloudlet-based mobile cloud computing model for green computing,” Journal of Network and Computer Applications 59:46-54, 2016. |
Gu et al., “Minimizing GCV/GML Scores With Multiple Smoothing Parameters Via the Newton Method,” SIAM J. Sci. Stat. Comput. 12(2):383-398, 1991. |
Gu, “Smoothing Spline ANOVA Models: R Package gss,” Journal of Statistical Software, 58(5), 2014. (25 pages). |
Johnson et al., “HEV Control Strategy for Real-Time Optimization of Fuel Economy and Emissions,” SAE Technical Paper No. 2000-01-1543, 2000. (15 pages). |
Junker, “Additive models and cross-validation,” 36-490, Mar. 22, 2010. (10 pages). |
Kim et al., “Smoothing Spline Gaussian Regression: More Scalable Computation via Efficient Approximation,” Journal of the Royal Statistical Society, Series B. 66:337-356, 2004. |
Knafl et al., “Dual-Use Engine Calibration,” SAE Technical Paper No. 2005-01-1549, 2005. (15 pages). |
Kumar et al., “Multi-objective modeling of production and pollution routing problem with time window: A self-learning particle swarm optimization approach,” Computers & Industrial Engineering 99:29-40, 2016. |
Lopez et al., “Edge-centric Computing: Vision and Challenges,” ACM SIGCOMM Computer Communication Review, 45(5):31-42, 2015. |
Marra et al., “Coverage Properties of Confidence Intervals for Generalized Additive Model Components,” Scandinavian Journal of Statistics, 39(1):53-74, 2012. (25 pages). |
Mobile-Edge Computing—Introductory Technical White Paper, ETSI, Issue 1, 2014. (36 pages). |
Nelder et al., “Generalized Linear Models,” Journal of the Royal Statistical Society. Series A (General), 135(3):370-384, 1972. (16 pages). |
Payo et al., “Control Applied to a Reciprocating Internal Combustion Engine Test Bench under Transient Operation: Impact on Engine Performance and Pollutant Emissions,” Energies 10:1690, 2017. (17 pages). |
Rask et al., “Simulation-Based Engine Calibration: Tools, Techniques, and Applications,” No. 2004-01-1264, SAE Technical Paper, 2004. (14 pages). |
Reiss et al., “Smoothing Parameter Selection for a Class of Semiparametric Linear Models,” Journal of the Royal Statistical Society, Series B. 71:505-523, 2009. (34 pages). |
Rigby et al., “Generalized additive models for location, scale and shape,” Appl. Statist. 54(3): 507-554, 2005. |
Rue et al., “Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations,” J. R. Statist. Soc. B. 71(2):319-392, 2009. |
Schmid et al., “Boosting Additive Models using Component-wise P-Splines,” Computational statistics and Data Analysis 53:298-311, 2008. |
Schumaker, “Spline Models for Observational Data,” SIAM Rev. 33(3):502, 1991. |
Senn, “A Conversation with John Nelder,” Statistical Science 18(1):118-131, 2003. |
Serrano et al., “Analysis of the capabilities of a two-stage turbocharging system to fulfil the US2007 anti-pollution directive for heavy duty diesel engines,” International Journal of Automotive Technology 9(3):277-288, 2008. |
Silverman, “Some Aspects of the Spline Smoothing Approach to Non-Parametric Regression Curve Fitting,” Journal of the Royal Statistical Society. Series B (Methodological), 47(1), 1985. (53 pages). |
Skala et al., “Scalable Distributed Computing Hierarchy: Cloud, Fog and Dew Computing,” Open Journal of Cloud Computing (OJCC) 2(1):16-24, 2015. |
Traver et al., “Neural Network-Based Diesel Engine Emissions Prediction Using In-Cylinder Combustion Pressure,” SAE Technical Paper, No. 1999-01-1532, 1999. (18 pages). |
Umlauf et al., “Structured Additive Regression Models: An R Interface to BayesX,” Journal of Statistical Software 63(21), 2012. (49 pages). |
Wahba, “Bayesian “Confidence Intervals” for the Cross-validated Smoothing Spline,” J.R. Statist. Soc. B 45(1):133-150, 1983. |
Wood, “Modelling and smoothing parameter estimation with multiple quadratic penalties,” J.R. Statist. Soc. B 62(2):413-428, 2000. (18 pages). |
Wood, “Fast stable direct fitting and smoothness selection for Generalized Additive Models,” Journal of the Royal Statistical Society, Series B. 70(3):495-518, 2008. |
Wood, “Fast stable REML and ML estimation of semiparametric GLMs,” Journal of the Royal Statistical Society, Series B. 73:3-36, 2011. |
Yuan et al., “Multi-sliding surface control for the speed regulation system of ship diesel engines,” Transactions of the Institute of Measurement and Control 40(1):22-34, 2018. |
Zeger et al., “Models for Longitudinal Data: A Generalized Estimating Equation Approach,” Biometrics 44(4):1049-1060, 1988. |
International Search Report and Written Opinion for PCT Application No. PCT/US2019/060618, dated Jan. 17, 2020, 18 pages. |
Number | Date | Country | |
---|---|---|---|
20200401743 A1 | Dec 2020 | US |
Number | Date | Country | |
---|---|---|---|
62758385 | Nov 2018 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16678991 | Nov 2019 | US |
Child | 17011727 | US |