The present invention relates to a travel assistance method and a travel assistance device of a vehicle.
Japanese Patent Laid-Open Publication No. 2016-216021 discloses that a travel history of each driver at the time of manual driving is managed, and that, at the time of autonomous-driving, each of a plurality of drivers is provided with a driving style suited to that individual.
However, in the example disclosed in Japanese Patent Laid-Open Publication No. 2016-216021, a sensor for performing face recognition or fingerprint recognition is required for identifying the driver who is driving at the time of manual driving. Meanwhile, there is a method for identifying a driver based on a switch operation by the driver, without using a sensor for identifying an individual as described above. However, this method cannot handle a situation in which the driver forgets to turn on the switch or omits the setting.
The present invention has been made in view of such a problem. It is an object of the present invention to provide a travel assistance method and a travel assistance device of a vehicle that identify a driver without requiring a sensor for identifying the driver or redundant operations.
In order to solve the above problem, a travel assistance method and a travel assistance device according to one aspect of the present invention identify a driver by using driving characteristics during manual driving by the driver and execute travel control corresponding to the identified driver.
According to the present invention, because a driver can be identified by using driving characteristics during manual driving, appropriate travel assistance suitable for the driver can be performed.
Embodiments of the present invention are described below with reference to the accompanying drawings.
[Configuration of Driving Control System]
The travel assistance device 11 is a controller that, in a vehicle capable of switching between manual driving by a driver and autonomous-driving, learns driving characteristics (learning of driving characteristics) based on predetermined learning target data among the pieces of travel data acquired during manual driving by the driver, and performs processing to apply the learning result to travel control of autonomous-driving.
Further, in the present embodiment, a case where the travel assistance device 11 is mounted on a vehicle is described. However, a communication device can be installed in a vehicle and a part of the travel assistance device 11 can be installed in an external server so that the external server performs processing to learn driving characteristics of drivers. When the travel assistance device 11 is mounted on a vehicle, driving characteristics of a driver who owns or uses the vehicle can be learned. Pieces of learning target data during a predetermined period (for example, the latest one month) can be stored so as to be reflected in autonomous-driving of the vehicle owned or used by the driver. On the other hand, when the travel assistance device 11 is installed in an external server, since learning can be performed by using learning target data of the driver himself for a long period of time, a more stable learning result can be calculated. Further, when learning has not been completed yet, by utilizing pieces of learning target data of other drivers, driving characteristics of an average driver in the area can be reflected in autonomous-driving.
The travel-status detection unit 21 detects travel data indicating a travel state of a vehicle, such as a vehicle velocity, a steering angle, an acceleration rate, an inter-vehicular distance from a preceding vehicle, a relative velocity with respect to the preceding vehicle, a current position, a display state of a direction indicator, a lighting state of a headlight, and an operating condition of wipers. For example, the travel-status detection unit 21 includes a sensor provided in a brake pedal or an accelerator pedal, sensors that acquire the behavior of the vehicle such as a wheel sensor and a yaw-rate sensor, a laser radar, a camera, an in-vehicle network such as a CAN (Controller Area Network) that communicates data acquired from these sensors, and a navigation device.
The surrounding-status detection unit 22 detects environmental information representing an environment in which a vehicle is traveling, such as the number of lanes, a speed limit, a road grade, and a road curvature of a road on which the vehicle is traveling, a display state of a traffic light in front of the vehicle, a distance to an intersection in front of the vehicle, the number of vehicles that are traveling in front of the vehicle, an expected course at an intersection in front of the vehicle, and the presence of a temporary stop regulation. For example, a camera, a laser radar, and a navigation device mounted on the vehicle are included in the surrounding-status detection unit 22. The display state of a traffic light in front of the vehicle and the presence of a temporary stop regulation can be detected by using road-to-vehicle communication. The number of vehicles that are traveling in front of the vehicle can be detected by using a cloud service that cooperates with vehicle-to-vehicle communication and a smartphone. The expected course at an intersection in front of the vehicle is acquired from the navigation device, a display state of the direction indicator, or the like. Further, the illuminance, temperature, and weather conditions around the vehicle are acquired from an illuminance sensor, an outside temperature sensor, and a wiper switch, respectively. Alternatively, the illuminance can also be acquired from a headlight switch.
The driving changeover switch 23 is a switch mounted on a vehicle to switch between autonomous-driving and manual driving, and is operated by an occupant of the vehicle. For example, it is a switch installed on the steering wheel of the vehicle.
The control-state presentation unit 61 displays whether the current control state is manual driving or autonomous-driving on a meter display unit, a display screen of the navigation device, a head-up display, and the like. Further, the control-state presentation unit 61 outputs a notification sound informing start and end of autonomous-driving, and presents whether learning of driving characteristics has been completed.
The actuator 31 receives an execution command from the travel assistance device 11 to drive respective units such as an accelerator, a brake, and a steering of the vehicle.
Next, respective units constituting the travel assistance device 11 are described. The travel assistance device 11 includes a learning-target data storage unit 41, a driving-characteristics learning unit 42, a driver identification unit 43, and an autonomous-driving control execution unit 45.
The learning-target data storage unit 41 acquires travel data relating to the travel state of the vehicle and pieces of environmental information relating to the travel environment around the vehicle from the travel-status detection unit 21, the surrounding-status detection unit 22, and the driving changeover switch 23, and stores therein predetermined learning target data required for learning driving characteristics of a driver in association with travel scenes such as the travel state and the travel environment of the vehicle.
The learning-target data storage unit 41 stores therein the predetermined learning target data required for learning driving characteristics of a driver for each of drivers. That is, the learning-target data storage unit 41 associates the learning target data with drivers, classifies the learning target data for each driver, and stores the learning target data therein.
Identification of a driver associated with the learning target data is performed by the driver identification unit 43 described later. New learning target data input to the learning-target data storage unit 41 from the travel-status detection unit 21, the surrounding-status detection unit 22, and the driving changeover switch 23 is temporarily stored in the learning-target data storage unit 41 as unregistered learning target data during a period until the driver identification unit 43 identifies the driver associated with the learning target data. After the driver identification unit 43 has identified the driver associated with the learning target data, the learning target data is registered in the learning-target data storage unit 41 as learning target data corresponding to the identified driver. As a result, the learning target data becomes learning target data registered in the learning-target data storage unit 41. It suffices that the timing to identify the driver is a timing at which the driver can be identified, such as a timing after driving 3 kilometers, a timing after driving for 10 minutes, or a timing after having acquired a predetermined amount of data (for example, 100 plots or 1 kilobyte).
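As a minimal sketch of the identification-timing check described above (the function name and default thresholds are illustrative assumptions taken from the examples in the preceding paragraph, not values prescribed by the embodiment):

```python
# Hypothetical sketch: decide whether enough unregistered learning target data has
# been accumulated for the driver identification unit 43 to attempt identification.
# Thresholds follow the examples above (3 km, 10 minutes, a predetermined amount of data).

def identification_timing_reached(distance_km: float,
                                  elapsed_min: float,
                                  num_samples: int,
                                  dist_threshold_km: float = 3.0,
                                  time_threshold_min: float = 10.0,
                                  sample_threshold: int = 100) -> bool:
    return (distance_km >= dist_threshold_km
            or elapsed_min >= time_threshold_min
            or num_samples >= sample_threshold)
```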
The learning-target data storage unit 41 may store therein a deceleration timing during manual driving by a driver. The learning-target data storage unit 41 may store therein a deceleration timing in a case of stopping at a stop position such as a stop line set at an intersection or the like, a deceleration timing in a case of stopping behind a preceding vehicle that is stopped, or a deceleration timing in a case of traveling following the preceding vehicle. Further, the learning-target data storage unit 41 may store therein the behavior of the vehicle at the time of operating the brake, such as a brake operating position, which is a position at which the brake is operated with respect to a stop position, a distance with respect to the stop position, a vehicle velocity at the time of operating the brake, and an acceleration rate.
The “deceleration timing” includes a timing when a driver operates the brake (a brake pedal) at the time of stopping a vehicle at the stop position, a timing when deceleration acts on the vehicle, a timing when an operation of the accelerator ends, or a timing when an operation of the brake pedal is started. Alternatively, the “deceleration timing” may include a timing when an operation amount (depression amount) of the brake pedal by a driver becomes equal to or larger than a predetermined amount set in advance, or a timing when an operation amount (depression amount) of the accelerator pedal by a driver becomes equal to or smaller than a predetermined amount set in advance. Alternatively, the “deceleration timing” may include a timing when a driver operates the brake and a control amount at the time of operating the brake has reached a certain value set in advance, or a timing when an increasing rate of the control amount at the time of operating the brake has reached a certain value.
That is, a timing when a control amount of the brake or an increasing rate of the control amount has reached a certain value may also be set as the “deceleration timing”, even though the predetermined deceleration has not yet been reached by the brake operation. That is, the “deceleration timing” is a concept including a timing when the brake is operated (a brake start timing), an accelerator-off timing (a brake start timing), a timing when the control amount of the brake has reached a certain value, and a timing when the increasing rate of the control amount of the brake has reached a certain value. In other words, it is a timing when a driver feels a brake operation.
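As an illustrative sketch of one way such a deceleration timing could be picked up from pedal signals (the signal names and threshold values are assumptions for illustration, not values specified in the embodiment):

```python
# Hypothetical sketch: return the earliest time at which either the accelerator
# operation amount falls to or below a preset amount, or the brake operation amount
# rises to or above a preset amount; both events count as a "deceleration timing".

def detect_deceleration_timing(time_s, accel_pedal, brake_pedal,
                               accel_off_threshold=0.05,
                               brake_on_threshold=0.10):
    for t, accel, brake in zip(time_s, accel_pedal, brake_pedal):
        if accel <= accel_off_threshold or brake >= brake_on_threshold:
            return t
    return None  # no deceleration timing in this data segment

# Example with hypothetical pedal traces sampled every 0.1 s:
ts    = [0.0, 0.1, 0.2, 0.3, 0.4]
accel = [0.40, 0.30, 0.10, 0.03, 0.00]
brake = [0.00, 0.00, 0.00, 0.05, 0.20]
print(detect_deceleration_timing(ts, accel, brake))  # -> 0.3 (accelerator-off timing)
```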
The brake in the present embodiment includes a hydraulic brake, an electronic control brake, and a regenerative brake. It can also include a state in which deceleration is acting even if the hydraulic brake, the electronic control brake, or the regenerative brake is not being operated.
Further, the learning-target data storage unit 41 may store therein an inter-vehicular distance between a vehicle and a preceding vehicle during manual driving by a driver. In addition to the inter-vehicular distance, the learning-target data storage unit 41 may store therein pieces of data such as an inter-vehicular distance during stop, a relative velocity with respect to the preceding vehicle, a steering angle, a deceleration rate, and a duration time while following the preceding vehicle.
Further, the learning-target data storage unit 41 may store therein a deceleration start speed when a vehicle stops at an intersection, a braking distance when a vehicle stops at an intersection, and the like. Further, the learning-target data storage unit 41 may store therein pieces of data such as an operation amount of the brake pedal and the accelerator pedal of a vehicle, a vehicle velocity and a deceleration rate, and a distance to a stop line at an intersection, during a deceleration operation.
The learning-target data storage unit 41 may store therein environmental information in which a vehicle is placed, other than these pieces of information. As the environmental information, the number of lanes, a road curvature, a speed limit, a road grade, and the presence of a temporary stop regulation of a road on which the vehicle is traveling, a display state of a traffic light, a distance from the vehicle to an intersection, the number of vehicles that are traveling in front of the vehicle, a display state of a direction indicator, the weather, temperature, or illuminance around the vehicle, and the like can be mentioned.
The driving-characteristics learning unit 42 reads learning target data stored in the learning-target data storage unit 41 and learns the driving characteristics of a driver corresponding to the learning target data, taking into consideration the travel state and the influence degree from the travel environment. The driving-characteristics learning unit 42 learns the driving characteristics for each piece of the learning target data based on the learning target data (unregistered learning target data and registered learning target data) stored in the learning-target data storage unit 41. The driving-characteristics learning unit 42 associates the learning results calculated in this manner with drivers, classifies the learning results for each driver, and stores the learning results therein.
Identification of a driver associated with the learning result is performed by the driver identification unit 43 described later. The learning result newly calculated by the driving-characteristics learning unit 42 is temporarily stored in the driving-characteristics learning unit 42 as an unregistered learning result, during a period until the driver identification unit 43 identifies a driver to be associated with the learning result. Further, after the driver identification unit 43 has identified a driver to be associated with the learning result, the learning result is registered in the driving-characteristics learning unit 42 as the learning result corresponding to the driver identified by the driver identification unit 43. As a result, the learning result becomes a learning result registered in the driving-characteristics learning unit 42.
Learning performed by the driving-characteristics learning unit 42 may be performed on a real time basis simultaneously with storage of the learning target data in the learning-target data storage unit 41. Alternatively, the learning performed by the driving-characteristics learning unit 42 may be performed every predetermined time, or at a timing when a certain amount of learning target data has been accumulated in the learning-target data storage unit 41.
The driver identification unit 43 identifies a driver based on an unregistered learning result temporarily stored in the learning-target data storage unit 41. Specifically, the driver identification unit 43 compares the unregistered learning result stored in the learning-target data storage unit 41 with a registered learning result.
As a result of comparison by the driver identification unit 43, when a registered learning result having driving characteristics with a difference from the driving characteristics in the unregistered learning result being within a predetermined value has been found, the driver identification unit 43 identifies that a driver corresponding to the unregistered learning result is the same person as the driver in the registered learning result.
As a result of comparison by the driver identification unit 43, when a registered learning result having driving characteristics with a difference from the driving characteristics in the unregistered learning result being within a predetermined value has not been found, the driver identification unit 43 identifies that a driver corresponding to the unregistered learning result is a new driver (a driver who does not correspond to any driver having been registered).
When a learning result of a new driver is to be registered in the driving-characteristics learning unit 42, an approval with respect to registration of a driver may be requested to an occupant. This request can be made by using an in-vehicle display, or by using a speaker. After a request is made to the occupant, selection by the occupant may be received by a touch input on a display or by recognizing the occupant's voice by a microphone.
When a learning result of a new driver is to be registered in the driving-characteristics learning unit 42, it may be requested to the occupant to input information identifying a driver. This request may be made by using an in-vehicle display, or by using a speaker. After a request is made to the occupant, selection by the occupant may be received by a touch input on the display or by recognizing the occupant's voice by the microphone.
As a result of comparison by the driver identification unit 43, when a plurality of registered learning results having driving characteristics with a difference from the driving characteristics in the unregistered learning result being within a predetermined value have been found, the driver identification unit 43 requests the occupant to select any of the drivers corresponding to the found learning results. This request may be made by using an in-vehicle display, or by using a speaker. After a request is made to the occupant, selection by the occupant may be received by a touch input on the display or by recognizing the occupant's voice by the microphone.
The autonomous-driving control execution unit 45 executes autonomous-driving control when a vehicle travels in an autonomous-driving section or when a driver selects autonomous-driving by the driving changeover switch 23. At this time, the autonomous-driving control execution unit 45 applies the learning result acquired by the driving-characteristics learning unit 42 to the travel control of autonomous-driving.
The travel assistance device 11 is constituted by a general-purpose electronic circuit including a microcomputer, a microprocessor, and a CPU, and peripheral devices such as a memory. The travel assistance device 11 operates as the learning-target data storage unit 41, the driving-characteristics learning unit 42, the driver identification unit 43, and the autonomous-driving control execution unit 45 which are described above, by executing specific programs. The respective functions of the travel assistance device 11 can be implemented by one or a plurality of processing circuits. The processing circuit includes a programmed processing device such as a processing device including, for example, an electric circuit, and also includes an application specific integrated circuit (ASIC) arranged to execute the functions described in the embodiment and a device such as conventional circuit components.
[Process Procedure for Learning Driving Characteristics]
Next, the process procedure for learning driving characteristics by the travel assistance device 11 according to the present embodiment is described with reference to a flowchart in
As illustrated in
At Step S103, the learning-target data storage unit 41 detects travel data relating to the travel state of the vehicle and environmental information relating to the travel environment around the vehicle from the travel-status detection unit 21, the surrounding-status detection unit 22, and the driving changeover switch 23. As the detected travel data, a vehicle velocity, a steering angle, an acceleration rate, a deceleration rate, an inter-vehicular distance from a preceding vehicle, a relative velocity with respect to the preceding vehicle, a current position, an expected course at an intersection in front of the vehicle, operation amounts of a brake pedal and an accelerator pedal, a duration time while following the preceding vehicle, a lighting state of a headlight, an operating condition of wipers, and the like are detected. Further, the learning-target data storage unit 41 detects, as the environmental information, the number of lanes, a road curvature, a speed limit, a road grade, and the presence of a temporary stop regulation on a road on which the vehicle is traveling, a display state of a traffic light, a distance from the vehicle to an intersection, the number of vehicles that are traveling in front of the vehicle, a display state of a direction indicator, and the weather, temperature, or illuminance around the vehicle. The new learning target data consisting of the travel data and the environmental information is temporarily stored in the learning-target data storage unit 41 as unregistered learning target data.
Next, at Step S105, the driving-characteristics learning unit 42 learns the driving characteristics of the driver corresponding to the learning target data, taking into consideration the travel state and the influence degree from the travel environment based on the learning target data stored in the learning-target data storage unit 41. A learning result acquired based on the unregistered learning target data is temporarily stored in the driving-characteristics learning unit 42 as an unregistered learning result.
Here, the driving-characteristics learning unit 42 creates a regression model (a multiple regression model) to obtain an equation quantitatively representing a relation between two or more kinds of data included in the learning target data, and performs learning by performing a regression analysis (a multiple regression analysis).
As a specific example, a case where data of a vehicle velocity V and an inter-vehicular distance D during a deceleration operation is acquired as the learning target data is considered. It is assumed that N measurement results (V1, D1), (V2, D2), . . . , (VN, DN) have been acquired for a set of two kinds of data of the vehicle velocity V and the inter-vehicular distance D. In the following descriptions, an ith measurement result is noted as (Vi, Di) (where i=1, 2, . . . , N).
It is assumed that a linear model represented by the following equation (1) is established, where β1 and β2 are regression coefficients, the inter-vehicular distance D is an explanatory variable (an independent variable), and the vehicle velocity V is an objective variable (a dependent variable, an explained variable).
V=β1+β2D (1)
An error term εi is defined by the following equation (2), assuming that an error from the regression model in the ith measurement result is εi.
εi=Vi−(β1+β2Di) (where i=1, 2, . . . , N) (2)
In the equation (2), by using a least-squares method in which a square sum S (where S=Σεi2, i=1, 2, . . . , N) of the error term εi is set to minimum, using β1 and β2 as parameters, an equation quantitatively representing a relation between N measurement results relating to the set of two kinds of data of the vehicle velocity V and the inter-vehicular distance D can be estimated. The parameters β1 and β2 when the square sum S of the error term εi is set to minimum are estimated amounts of the regression coefficients β1 and β2 appearing in the equation (1), and are referred to as least squares estimators L1 and L2. By deciding the least squares estimators L1 and L2, a quantitative relation between the vehicle velocity V and the inter-vehicular distance D can be estimated.
A regression residual Ei is defined according to the following equation (3) based on the least squares estimators L1 and L2.
Ei=Vi−(L1+L2Di) (where i=1, 2, . . . , N) (3)
In the learning target data to be subjected to the regression analysis, when the number N of measurement results is sufficiently large, it is considered that the regression residual Ei follows the normal distribution (an average 0, a standard deviation σE). Therefore, the standard deviation of the regression residual Ei is estimated. In the following descriptions, an estimate of the standard deviation σE of the regression residual Ei is designated as a standard error sE. The standard error sE is defined by the following equation (4).
sE={(ΣEi2)/(N−2)}1/2 (4)
Here, the reason why the square sum (ΣEi2) of the regression residual Ei is divided by (N−2) in the definition of the standard error sE is related to the fact that there are two least squares estimators. The square sum (ΣEi2) is divided by (N−2) so that the estimate of the variance of the regression residual remains unbiased.
The least squares estimators L1 and L2 are linear functions of the regression residual Ei that is considered to follow the normal distribution, and thus it is considered that the least squares estimator L1 follows the normal distribution (an average β1, a standard deviation σL1), and the least squares estimator L2 follows the normal distribution (an average β2, a standard deviation σL2). Therefore, the standard deviations σL1 and σL2 of the least squares estimators L1 and L2 can be estimated based on the equation (3) and the standard error sE. In the following descriptions, an estimate of the standard deviation σL1 of the least squares estimator L1 is designated as a standard error sL1 and an estimate of the standard deviation σL2 of the least squares estimator L2 is designated as a standard error sL2.
The driving-characteristics learning unit 42 performs learning of the driving characteristics based on the learning target data, by estimating the least squares estimators [L1, L2] and the standard errors [sL1, sL2] as described above. The driving-characteristics learning unit 42 stores therein the acquired least squares estimators [L1, L2] and the standard errors [sL1, sL2] as the driving characteristics relating to the learning result acquired from the learning target data.
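A minimal sketch of this learning step is shown below, under the assumption that plain numpy is available; the function and variable names are illustrative, and the embodiment does not prescribe a particular implementation.

```python
# Estimate the least squares estimators [L1, L2] and their standard errors [sL1, sL2]
# for the linear model V = beta1 + beta2*D from N measurements (Vi, Di),
# following equations (1) to (4).
import numpy as np

def learn_driving_characteristics(D, V):
    D = np.asarray(D, dtype=float)
    V = np.asarray(V, dtype=float)
    N = len(D)
    Dbar, Vbar = D.mean(), V.mean()
    Sdd = np.sum((D - Dbar) ** 2)

    # Least squares estimators minimizing the square sum S of the error term.
    L2 = np.sum((D - Dbar) * (V - Vbar)) / Sdd
    L1 = Vbar - L2 * Dbar

    # Regression residuals Ei (equation (3)) and standard error sE (equation (4)).
    E = V - (L1 + L2 * D)
    sE = np.sqrt(np.sum(E ** 2) / (N - 2))

    # Standard errors of the estimators (classical least squares formulas).
    sL2 = sE / np.sqrt(Sdd)
    sL1 = sE * np.sqrt(np.sum(D ** 2) / (N * Sdd))
    return L1, L2, sL1, sL2

# Hypothetical measurements: inter-vehicular distance D [m], vehicle velocity V [km/h].
D = [5, 10, 15, 20, 25, 30, 35, 40]
V = [12, 21, 28, 41, 49, 62, 68, 80]
print(learn_driving_characteristics(D, V))
```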
The driving-characteristics learning unit 42 may also store therein the number N of pieces of data included in the learning target data that has been used for learning. The driving-characteristics learning unit 42 may further store therein the travel frequency in an area where the vehicle travels, corresponding to the learning target data that has been used for learning.
In the above descriptions, a regression model between the vehicle velocity V and the inter-vehicular distance D is mentioned as an example. However, a similar regression analysis (a multiple regression analysis) may be performed by using not only the vehicle velocity V and the inter-vehicular distance D, but also other two or more pieces of data. In the above descriptions, since the regression analysis is performed between two pieces of data, two values L1 and L2 are acquired as the least squares estimator. Generally, when a regression analysis between M pieces of data is performed, M values [L1, L2, . . . , LM] are acquired as the least squares estimator. Similarly, M values [sL1, sL2, . . . , sLM] are acquired as the standard error corresponding to the least squares estimator.
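For the general M-variable case mentioned above, a sketch using the design-matrix form of least squares could look as follows; the names are illustrative, and any equivalent multiple regression routine could be used.

```python
# Hypothetical sketch of multiple regression learning: X is an (N, M-1) array of
# explanatory variables, y the objective variable. Returns the M least squares
# estimators [L1, ..., LM] (intercept first) and their standard errors [sL1, ..., sLM].
import numpy as np

def learn_multiple_regression(X, y):
    X = np.column_stack([np.ones(len(y)), np.asarray(X, dtype=float)])
    y = np.asarray(y, dtype=float)
    N, M = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    L = XtX_inv @ X.T @ y                 # least squares estimators
    E = y - X @ L                         # regression residuals
    sE2 = np.sum(E ** 2) / (N - M)        # residual variance, divided by N - M
    sL = np.sqrt(sE2 * np.diag(XtX_inv))  # standard errors of the estimators
    return L, sL
```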
Further, in the above descriptions, a linear model (linear regression) that assumes a linear relation between pieces of data is mentioned as a regression model. However, a model other than the linear model can also be handled by the linear-model method described above, so long as the model can be transformed into a linear model by functional transformation or the like. For example, an elastic model in which an explained variable is proportional to a power of an explanatory variable, or an elastic model (exponential regression) in which an explained variable is proportional to an exponential function of an explanatory variable may be used. Alternatively, a linear model, an elastic model, or a combination of elastic models may be used.
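As an illustrative sketch of this transformation approach (the model forms and function names are examples only, not forms prescribed by the embodiment):

```python
# For an exponential-type model V = a*exp(b*D), taking logarithms gives
# ln(V) = ln(a) + b*D, which ordinary linear least squares can fit directly.
# For a power-type model V = a*D**b, both sides are log-transformed instead.
import numpy as np

def fit_exponential_model(D, V):
    b, ln_a = np.polyfit(np.asarray(D, dtype=float), np.log(V), 1)
    return np.exp(ln_a), b   # a, b

def fit_power_model(D, V):
    b, ln_a = np.polyfit(np.log(D), np.log(V), 1)
    return np.exp(ln_a), b   # a, b
```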
In the above descriptions, it is considered that, when the number N of measurement results is sufficiently large, the regression residual Ei follows the normal distribution. Generally, however, the regression residual Ei does not always follow the normal distribution. For example, when the number N of measurement results is small (for example, N is less than 30), learning of the driving characteristics may be performed by assuming a distribution other than the normal distribution, matched with the property of the data. For example, learning of the driving characteristics may be performed by assuming a binomial distribution, a Poisson distribution, or a uniform distribution other than the normal distribution. Learning of the driving characteristics may also be performed by non-parametric estimation.
Other than the methods described above, learning of the driving characteristics may be performed by deep learning (hierarchical learning, machine learning) using a neural network, in which training data is input to the neural network, an output error is calculated, and various parameters of the neural network are adjusted so that the error becomes minimum.
In the above descriptions, it is assumed that learning is performed by using all the measurement results included in the learning target data. However, selection or weighting of the measurement results to be used for learning may be performed according to a travel area where a vehicle travels. For example, frequency information of the routes and places (a place of departure, a through location, and a destination) where the vehicle travels is determined based on one or a plurality of pieces of learning target data, and when a measurement result included in the learning target data being learned has been measured in an area having a high travel frequency, the contribution of that measurement result to the square sum S of the error term εi used in the regression analysis may be set high.
Specifically, the square sum S of the error term εi may be defined by using a weighting parameter Wi according to the following equation (5). Here, when selection of the measurement results to be used for learning is to be performed, the weighting parameter Wi takes a value of 1 for a measurement result to be used for learning and a value of 0 for a measurement result not to be used for learning. When weighting of the measurement results to be used for learning is to be performed, the weighting parameter Wi takes a larger value as the travel frequency in the area corresponding to the measurement result becomes higher.
S=Σ(Wi·εi2) (5)
By performing selection or weighting of the measurement results to be used for learning according to a travel area where the vehicle travels, as the travel frequency in the area where the vehicle travels becomes higher, the driving characteristics during manual driving by a driver in the area can be learned with a higher degree of priority. As the travel frequency in the area where the vehicle travels becomes higher, it is considered that the driver is used to driving in the area, and it is considered that the driving characteristics of the driver appear strongly in the learning target data.
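A small sketch of this weighted learning is given below; the weighting rule and names are chosen for illustration only.

```python
# Weighted fit of V = L1 + L2*D minimizing S = sum(Wi * eps_i**2) as in equation (5).
# Wi is 0/1 when measurement results are selected, or grows with the travel
# frequency of the area in which measurement i was taken when they are weighted.
import numpy as np

def weighted_least_squares(D, V, W):
    D, V, W = (np.asarray(a, dtype=float) for a in (D, V, W))
    X = np.column_stack([np.ones_like(D), D])
    XtWX = X.T @ (W[:, None] * X)
    XtWy = X.T @ (W * V)
    L1, L2 = np.linalg.solve(XtWX, XtWy)
    return L1, L2

# Example: measurements from a frequently travelled area get twice the weight.
D = [5, 10, 15, 20]
V = [13, 22, 30, 39]
W = [2.0, 2.0, 1.0, 1.0]
print(weighted_least_squares(D, V, W))
```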
In the above descriptions, the driving characteristics and the standard error are estimated from the learning target data by the regression analysis. However, a mean value and a standard deviation of the deceleration timing may be estimated respectively as the driving characteristics and the standard error, based on the frequency distribution relating to the deceleration timing (the deceleration timing is plotted on the horizontal axis, and the frequency is plotted on the vertical axis) acquired from the measurement results. Other than this estimation, a mean value and a standard deviation of the inter-vehicular distance may be estimated respectively as the driving characteristics and the standard error, based on the frequency distribution relating to the inter-vehicular distance between a vehicle and a preceding vehicle (the inter-vehicular distance is plotted on the horizontal axis, and the frequency is plotted on the vertical axis) acquired from the measurement results. Further, a mean value and a standard deviation of the vehicle velocity during a deceleration operation may be estimated as the driving characteristics and the standard error based on the frequency distribution (the vehicle velocity is plotted on the horizontal axis, and the frequency is plotted on the vertical axis) acquired from the measurement results.
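A short sketch of this frequency-distribution alternative, with hypothetical bin values and counts:

```python
# Estimate the mean (used as the driving characteristics) and the standard deviation
# (used as the standard error) of the deceleration timing from its frequency distribution.
import numpy as np

timing_s  = np.array([1.0, 1.5, 2.0, 2.5, 3.0])  # deceleration timing bins [s]
frequency = np.array([2, 5, 9, 6, 3])            # observed counts per bin

mean = np.average(timing_s, weights=frequency)
std  = np.sqrt(np.average((timing_s - mean) ** 2, weights=frequency))
print(mean, std)
```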
Next, at Step S107, the driver identification unit 43 identifies a driver based on the unregistered learning result temporarily stored in the learning-target data storage unit 41. Specifically, the driver identification unit 43 compares the unregistered learning result with the registered learning results stored in the learning-target data storage unit 41.
As illustrated in
The driver identification unit 43 compares the learning results with each other by conducting a t-test for the driving characteristics.
When the unregistered learning result is to be compared with the learning result of the driver A, the driver identification unit 43 designates a null hypothesis as “LU=LA” and an alternative hypothesis as “LU≠LA”, and uses a two-sample t-statistic defined by the following equation (6).
TUA={LU−LA}/{sU2+sA2}1/2 (6)
When the least squares estimator LU and the least squares estimator LA follow the normal distribution, the two-sample t-statistic TUA between the unregistered learning result and the learning result of the driver A follows a t-distribution. The t-distribution has degrees of freedom that depend on the learning target data corresponding to the unregistered learning result, the learning target data corresponding to the learning result of the driver A, and the like.
The driver identification unit 43 calculates the two-sample t-statistic TUA and conducts a test with a significance level α=0.05. That is, the level regarded as having a significant difference is set to 5%.
The significance level α may be changed based on the number of measurement results included in the learning target data.
Similarly, the driver identification unit 43 calculates a two-sample t-statistic TUB between the unregistered learning result and the learning result of the driver B and calculates a two-sample t-statistic TUC between the unregistered learning result and the learning result of the driver C.
In this manner, the driver identification unit 43 calculates the two-sample t-statistic between the unregistered learning result and the registered learning result. If the registered learning result has not been stored in the learning-target data storage unit 41, the driver identification unit 43 does not perform comparison between the learning results described above.
Next, at Step S109, the driver identification unit 43 determines whether there is a registered learning result matched with the unregistered learning result.
The driver identification unit 43 rejects the null hypothesis when the calculated two-sample t-statistic TUA deviates largely from 0, specifically, when the absolute value of the two-sample t-statistic TUA is larger than a percentage point Tα/2 in the t-distribution defined by the significance level α.
Here, the percentage point Tα/2 is a value of the two-sample t-statistic for which the upper probability in the t-distribution becomes α/2. The set of statistic values for which the null hypothesis is rejected (the rejection region) includes both a positive region deviated from 0 and a negative region deviated from 0, and therefore a two-sided test needs to be conducted. For this reason, the upper probability is set to a value half the significance level α.
When the null hypothesis “LU=LA” is rejected, the driver identification unit 43 determines that the unregistered learning result and the learning result of the driver A do not match with each other. Further, the driver identification unit 43 identifies that a driver corresponding to the unregistered learning result is not the driver A.
On the other hand, when the null hypothesis “LU=LA” is adopted (not rejected), the driver identification unit 43 judges that the unregistered learning result and the learning result of the driver A match with each other. Further, the driver identification unit 43 identifies that a driver corresponding to the unregistered learning result is the driver A.
That is, the driver identification unit 43 compares LU, representing the driving characteristics in the unregistered learning result, with LA, representing the driving characteristics in the learning result of the driver A, and if the difference between LU and LA is equal to or smaller than a predetermined value, the driver identification unit 43 identifies that the driver corresponding to the unregistered learning result is the driver A in the registered learning result.
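A minimal sketch of this decision for a single driving characteristic is shown below, assuming scipy is available; the Welch-Satterthwaite degrees of freedom used here are one reasonable choice and not something the embodiment prescribes.

```python
# Compare the unregistered learning result (L_u, s_u, n_u) with the registered
# learning result of driver A (L_a, s_a, n_a) using the two-sample t-statistic of
# equation (6) and a two-sided test at significance level alpha.
from scipy import stats

def matches_registered_driver(L_u, s_u, n_u, L_a, s_a, n_a, alpha=0.05):
    t_ua = (L_u - L_a) / (s_u ** 2 + s_a ** 2) ** 0.5      # two-sample t-statistic (6)
    df = (s_u ** 2 + s_a ** 2) ** 2 / (
        s_u ** 4 / (n_u - 2) + s_a ** 4 / (n_a - 2))       # assumed degrees of freedom
    t_crit = stats.t.ppf(1.0 - alpha / 2.0, df)            # percentage point T_{alpha/2}
    return abs(t_ua) <= t_crit                             # True: null hypothesis adopted

# Hypothetical learning results: unregistered result vs. registered driver A.
print(matches_registered_driver(L_u=1.52, s_u=0.08, n_u=120,
                                L_a=1.47, s_a=0.06, n_a=300))
```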
Similarly, the driver identification unit 43 determines whether the unregistered learning result and the learning result of the driver B match with each other based on the two-sample t-statistic TUB, and identifies whether a driver corresponding to the unregistered learning result is the driver B. Further, the driver identification unit 43 determines whether the unregistered learning result and the learning result of the driver C match with each other based on the two-sample t-statistic TUC, and identifies whether a driver corresponding to the unregistered learning result is the driver C.
If a registered learning result matched with the unregistered learning result is not found, or a registered learning result has not been stored in the learning-target data storage unit 41, the driver identification unit 43 identifies that a driver corresponding to the unregistered learning result is a new driver (a driver not corresponding to any of the registered drivers).
As a result of comparison by the driver identification unit 43, if there is no registered learning result matched with the unregistered learning result (NO at Step S109), the process proceeds to Step S111, and if there is a registered learning result matched with the unregistered learning result (YES at Step S109), the process proceeds to Step S113.
At Step S111, the learning-target data storage unit 41 registers therein the unregistered learning target data as learning target data corresponding to the new driver. Further, the driving-characteristics learning unit 42 registers the unregistered learning result as a learning result corresponding to the new driver.
At Step S113, as a result of comparison by the driver identification unit 43, if there is only one registered learning result matched with the unregistered learning result (YES at Step S113), the process proceeds to Step S115, and the autonomous-driving control execution unit 45 applies the registered learning result matched with the unregistered learning result to autonomous-driving.
At Step S113, if there are a plurality of registered learning results matched with the unregistered learning result (NO at Step S113), the process proceeds to Step S117, and the control-state presentation unit 61 displays a plurality of driver candidates corresponding to the matched registered learning results.
At Step S119, when one driver is selected among the plurality of driver candidates displayed on the control-state presentation unit 61 by a user of the travel assistance device 11, the autonomous-driving control execution unit 45 applies the registered learning result matched with the unregistered learning result, which is a learning result of the selected driver, to autonomous-driving.
In the above descriptions, the t-test for the driving characteristics is conducted by using one piece of driving characteristics (one least squares estimator) among the driving characteristics included in the learning result. However, the t-test for the driving characteristics may be conducted by combining two or more pieces of driving characteristics. As compared with a case where only one piece of driving characteristics is used, more accurate comparison between learning results and identification of the driver can be performed by combining more pieces of driving characteristics.
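One illustrative way to combine two or more pieces of driving characteristics, reusing the single-characteristic test sketched above, is shown below; the all-must-match rule is an assumption, and other combination rules such as a joint test are equally possible.

```python
# unregistered / registered: lists of (L, s, N) tuples, one tuple per driving
# characteristic. The learning results are judged to match only when every
# individual t-test is not rejected.
def matches_on_all_characteristics(unregistered, registered, alpha=0.05):
    return all(
        matches_registered_driver(L_u, s_u, n_u, L_a, s_a, n_a, alpha)
        for (L_u, s_u, n_u), (L_a, s_a, n_a) in zip(unregistered, registered)
    )
```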
At Step S109 described above, when a driver corresponding to the unregistered learning target data is identified, the learning result acquired by performing learning using both the unregistered learning target data and the learning result corresponding to the identified driver may be applied to autonomous-driving, instead of applying the registered learning result to autonomous-driving at Step S115 and Step S119.
That is, at Step S115 and Step S119, the unregistered learning target data may be merged with the learning target data of the identified driver and the learning result based on the newly acquired learning target data may be applied to autonomous-driving. By performing the process, the data size of the learning target data can be increased, and a learning result on which the driving characteristics of the identified driver is strongly reflected can be applied to autonomous-driving.
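A small sketch of this merge-and-relearn variation (container and function names are illustrative; the learning routine sketched earlier is assumed to be in scope):

```python
# Append the unregistered learning target data to the identified driver's registered
# data and learn the driving characteristics again on the combined data set.
def merge_and_relearn(registered_D, registered_V, new_D, new_V):
    D = list(registered_D) + list(new_D)
    V = list(registered_V) + list(new_V)
    return learn_driving_characteristics(D, V)
```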
When the number N of measurement results included in the learning target data corresponding to the unregistered learning result is small (for example, N is less than 30), a distribution matched with the learning target data may be decided and a test statistic corresponding to that distribution may be calculated, instead of calculating the two-sample t-statistic that is assumed to follow the t-distribution. Alternatively, non-parametric estimation may be performed based on the learning target data to perform comparison between the learning results.
Other than the methods described above, comparison between the learning results may be performed by deep learning (hierarchical learning, machine learning) using a neural network.
For the comparison between the learning results, various methods can be mentioned as described above. Such a method that can reject or adopt the null hypothesis that “learning results match with each other”, by calculating a predetermined probability based on two or more learning results to be compared, and comparing the probability with the significance level, can be used as a comparison method of learning results in the present invention.
[Effects of Embodiments]
As described above in detail, in the travel assistance method according to the present embodiment, in a vehicle capable of switching between manual driving by a driver and autonomous-driving, a driver is identified by using driving characteristics during manual driving by the driver, and travel control is executed based on a learning result corresponding to the identified driver. Accordingly, the driver can be identified without requiring a sensor or redundant operations for identifying the driver, and appropriate travel assistance suitable for the driver can be performed.
Particularly, since a driver can be identified based on the driving characteristics during manual driving instead of by using a sensor for identifying a driver, such as a sensor for performing face recognition or fingerprint recognition, cost reduction can be achieved as compared with a product in which a sensor for identifying a driver is installed. For example, the cost of a fingerprint authentication sensor, which is about 5,000 yen for mass-produced products, can be eliminated from the manufacturing cost.
Further, the travel assistance method according to the present embodiment may be such that the driving characteristics during manual driving and the learning result corresponding to a driver are compared with each other, and when a difference between the driving characteristics during manual driving and driving characteristics in the learning result is larger than a predetermined value, the driving characteristics during manual driving is registered as a learning result of a new driver. Accordingly, a driver can be identified accurately based on unique driving characteristics of the driver. Further, an unregistered new driver can be automatically registered without requiring any special operations by the driver.
Further, the travel assistance method according to the present embodiment may request an occupant to provide an approval to registration, when a learning result of a new driver is to be registered. Accordingly, it can be avoided that a new driver who is not intended to be registered by the occupant is registered. Therefore, a travel assistance method meeting the intention of the occupant can be realized and it can be prevented that a new driver is registered by mistake.
Further, the travel assistance method according to the present embodiment may request the occupant to input information that identifies a driver, when a learning result of a new driver is registered. Accordingly, a driver corresponding to the learning result can be set. Therefore, when the learning result is used after the setting, for example, when selection of a driver is requested to the occupant, the occupant can select an appropriate learning result. As the information that identifies a driver, an input of attributes such as age and gender may be requested.
Further, the travel assistance method according to the present embodiment may be such that the driving characteristics during manual driving is compared with a learning result corresponding to a driver, and when a plurality of learning results having driving characteristics whose difference from the driving characteristics during manual driving is within a predetermined value have been found, selection of a driver from the plurality of drivers corresponding to the found learning results is requested to an occupant. Accordingly, a user can select, from among the plurality of drivers corresponding to the found learning results, the driver on whose learning result travel control of autonomous-driving is to be based. Further, it can be avoided that a learning result which is not intended to be used by the user is used.
Further, in the travel assistance method according to the present embodiment, as the travel frequency in an area where a vehicle travels becomes higher, the driving characteristics of that area may be used more preferentially as the driving characteristics during manual driving at the time of identifying a driver. It is considered that, as the travel frequency in the area where the vehicle travels becomes higher, the driver is more used to driving in the area, and the driving characteristics of the driver are more strongly reflected in the learning target data. Therefore, by providing a degree of priority based on the travel frequency in the area, a driver can be identified more accurately.
Further, the travel assistance method according to the present embodiment may use a deceleration timing during manual driving, an inter-vehicular distance between a vehicle and a preceding vehicle, a vehicle velocity during a deceleration operation, or a combination thereof as the driving characteristics during manual driving. Among the driving characteristics appearing in travel data of the vehicle, the driving characteristics such as the deceleration timing during manual driving, the inter-vehicular distance between the vehicle and the preceding vehicle, and the vehicle velocity during the deceleration operation are driving characteristics in which the personality of a driver tends to appear as compared with other driving characteristics. Therefore, by using these driving characteristics, the driver can be identified more accurately.
Further, the travel assistance method according to the present embodiment may be such that, when there is no registered learning result, identification of a driver based on the learning result is not performed. Therefore, the processing time required for identifying a driver can be decreased, thereby enabling the entire system to operate at a higher speed.
Further, when there is only one registered learning result, for example, when there is only one driver who drives the vehicle on a daily basis, there may be a case where identification of a driver is not necessary in the first place. In such a case, it is also possible not to perform identification of a driver based on the learning result. Therefore, the processing time required for identifying a driver can be decreased, thereby enabling the entire system to operate at a higher speed.
Further, the travel assistance method according to the present embodiment may learn the driving characteristics for each driver by an external server provided outside a vehicle. Accordingly, a processing load of the vehicle can be reduced.
Further, even when a driver uses a plurality of vehicles, learning results from the vehicles are integrated and managed by an external server and the integrated learning results are distributed from the external server to a vehicle that requires travel control of autonomous-driving, so that the integrated learning results can be shared among the vehicles. Accordingly, appropriate travel assistance suitable for a driver can be performed. It is particularly useful to perform processing by the external server, in a case where it is assumed that a driver uses a plurality of vehicles such as car sharing.
Although the contents of the present invention have been described above with reference to the embodiments, the present invention is not limited to these descriptions, and it will be apparent to those skilled in the art that various modifications and improvements can be made. It should not be construed that the present invention is limited to the descriptions and the drawings that constitute a part of the present disclosure. On the basis of the present disclosure, various alternative embodiments, practical examples, and operating techniques will be apparent to those skilled in the art.
It is needless to mention that the present invention also includes various embodiments that are not described herein. Therefore, the technical scope of the present invention is to be defined only by the invention specifying matters according to the scope of claims appropriately obtained from the above descriptions.
Respective functions described in the above respective embodiments may be implemented on one or more processing circuits. The processing circuits include programmed processors such as processing devices and the like including electric circuits. The processing devices include devices such as application specific integrated circuits (ASIC) and conventional circuit constituent elements that are arranged to execute the functions described in the embodiments.
Filing Document: PCT/JP2017/033920 | Filing Date: 9/20/2017 | Country: WO | Kind: 00