The present invention relates to a moving body prediction device, a learning method, a traffic safety support system, and a storage medium. More specifically, the present invention relates to a moving body prediction device that predicts futures of moving bodies in a traffic area, a learning method for the moving body prediction device, a traffic safety support system that includes the moving body prediction device, and a storage medium.
In public traffic, various traffic participants such as moving bodies including four-wheeled vehicles, motorcycles, bicycles, and the like, and pedestrians move at different speeds on the basis of individual intentions. In recent years, there have been proposed various techniques for improving safety, convenience, and the like of traffic participants in such public traffic (for example, see Patent Documents 1 and 2).
A collision avoidance device disclosed in Patent Document 1 estimates a haste degree of a driver on the basis of a traveling state of a vehicle when the vehicle temporarily stops, and estimates acceleration when the vehicle starts to move, on the basis of the haste degree. The collision avoidance device determines a collision possibility with a moving object that is approaching the vehicle, on the basis of the estimated acceleration, and actuates an operation (e.g., notification to a driver) for avoiding a collision on the basis of a result of the determination.
As in the invention disclosed in Patent Document 1, in order to actuate the operation for avoiding a collision at appropriate timing, it is necessary to predict the future action of the vehicle in advance with high precision by using a vehicle action prediction device as disclosed in Patent Document 2, for example. In the vehicle action prediction device disclosed in Patent Document 2, the future action of the vehicle is predicted using a neural network previously trained to output future vehicle information from past image information and past vehicle information.
Incidentally, a future action pattern of a vehicle often varies depending on not only a state of the vehicle and a state therearound but also a state of a driver of the vehicle. However, the vehicle action prediction device as disclosed in Patent Document 2 associates past image information, vehicle information, and the like with the action pattern of the vehicle, but does not reflect the state of the driver. Thus, the vehicle action prediction device disclosed in Patent Document 2 may not be able to predict the future action of the vehicle with sufficient precision. Further, because the vehicle action prediction device disclosed in Patent Document 2 does not reflect the state of the driver, the action pattern cannot be sufficiently narrowed down, which may result in a large calculation load.
The present invention is directed to providing a moving body prediction device capable of predicting future action of a moving body while reflecting a state of a driver of the moving body, a learning method, a traffic safety support system, and a storage medium.
(1) A moving body prediction device according to the present invention, which predicts, when a moving body moving in a traffic area is defined as a prediction target, a future of the prediction target in the traffic area, includes a movement state information acquirer configured to acquire movement state information regarding a movement state of the prediction target, a surrounding state information acquirer configured to acquire surrounding state information regarding movement states of traffic participants around the prediction target in the traffic area, a traffic environment information acquirer configured to acquire traffic environment information around the prediction target in the traffic area, a driver state information acquirer configured to acquire driver state information regarding a state of a driver of the prediction target, a traffic scene specifier configured to specify a traffic scene of the prediction target on a basis of the movement state information, the surrounding state information, and the traffic environment information, an action pattern selector configured to select, as a predicted action pattern, at least one from among a plurality of predetermined action patterns on the basis of the traffic scene and the driver state information, and an action predictor configured to predict future action of the prediction target on the basis of the predicted action pattern.
(2) In this case, the driver state information acquirer preferably acquires, as the driver state information, information correlated with driving capability of the driver.
(3) In this case, the driver state information acquirer preferably generates the driver state information on the basis of biological information of the driver detected by a biological information sensor provided in the prediction target.
(4) In this case, the action pattern selector preferably selects the predicted action pattern by using an action pattern prediction model that outputs at least one of the action patterns when the traffic scene and the driver state information are input. A learning method for the moving body prediction device according to the present invention includes the steps of generating input data to the action pattern prediction model on a basis of the traffic scene and the driver state information acquired in a predetermined first time period, generating correct answer data to an output of the action pattern prediction model on the basis of the movement state information acquired in a second time period immediately after the first time period, and learning the action pattern prediction model using learning data obtained by combining the input data and the correct answer data.
(5) In this case, the moving body prediction device preferably further includes a collision predictor configured to predict whether or not a collision will occur in a future of the prediction target on a basis of the movement state information, the surrounding state information, the traffic environment information, and a prediction result of the action predictor.
(6) A traffic safety support system according to the present invention includes on-board devices that move along with the prediction target, and a traffic management server capable of communicating with the on-board devices, wherein the traffic management server includes the moving body prediction device and a support information notifier configured to transmit, to the on-board devices, support information including information regarding a prediction result of the collision predictor in a case where it is predicted by the collision predictor that the prediction target will collide, and the on-board devices include an on-board notification device configured to notify the driver of information generated on the basis of the support information by at least one selected from an image and sound.
(7) In this case, the moving body prediction device preferably further includes a support action information generator configured to generate support action information regarding action for avoiding a collision or action for reducing damage due to the collision on the basis of the movement state information and the surrounding state information in a case where it is predicted by the collision predictor that the prediction target will collide, the support information notifier preferably transmits, to the on-board devices, the information regarding the prediction result and the support information including the support action information, and the on-board notification device preferably notifies the driver of information, by at least one selected from an image and sound, generated so that the driver performs a driving operation according to the support action information.
(8) A traffic safety support system according to the present invention includes on-board devices that move along with the prediction target, and a traffic management server capable of communicating with the on-board devices, wherein the traffic management server includes the moving body prediction device and a support information notifier configured to transmit, to the on-board devices, support information including information regarding a prediction result of the collision predictor in a case where it is predicted by the collision predictor that the prediction target will collide, and the on-board devices include an on-board driving support device configured to automatically control behavior of the prediction target on the basis of the support information.
(9) In this case, the moving body prediction device preferably further includes a support action information generator configured to generate support action information regarding action for avoiding a collision or action for reducing damage due to the collision on the basis of the movement state information and the surrounding state information in a case where it is predicted by the collision predictor that the prediction target will collide, the support information notifier preferably transmits, to the on-board devices, the information regarding the prediction result and the support information including the support action information, and the on-board driving support device preferably automatically controls behavior of the prediction target on the basis of the support action information.
(1) A moving body prediction device includes a movement state information acquirer configured to acquire movement state information regarding a movement state of a prediction target which is a moving body, a surrounding state information acquirer configured to acquire surrounding state information regarding movement states of traffic participants around the prediction target, a traffic environment information acquirer configured to acquire traffic environment information around the prediction target, and a driver state information acquirer configured to acquire driver state information regarding a state of a driver of the prediction target. Further, the moving body prediction device includes a traffic scene specifier configured to specify a traffic scene of the prediction target on a basis of the movement state information, the surrounding state information, and the traffic environment information, an action pattern selector configured to select, as a predicted action pattern, at least one from among a plurality of predetermined action patterns on the basis of the traffic scene and the driver state information, and an action predictor configured to predict future action of the prediction target on the basis of the predicted action pattern. Thus, according to the present invention, the predicted action pattern can be efficiently narrowed down from among the plurality of action patterns while reflecting the state of the driver, by using the traffic scene in which the prediction target is placed and the driver state information of the driver of the prediction target. As a result, the future action of the prediction target can be predicted with high precision and with a reduced calculation load, while reflecting the state of the driver.
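The narrowing-down flow described in (1) can be illustrated with a minimal sketch. Everything here — the scene labels, the action patterns, the haste threshold, and the function names — is a hypothetical illustration of the idea, not the disclosed implementation:

```python
# Illustrative sketch: specify a traffic scene, narrow the predefined
# action patterns using the scene and the driver state, then predict
# only over the narrowed candidates. All labels and thresholds are
# assumptions for illustration.

ACTION_PATTERNS = ["go_straight", "turn_left", "turn_right", "stop", "sudden_start"]

def specify_traffic_scene(movement, surroundings, environment):
    """Map the three information sources to a coarse traffic scene label."""
    if environment.get("intersection") and movement["speed"] < 1.0:
        return "stopped_at_intersection"
    return "cruising"

def select_action_patterns(scene, driver_state):
    """Narrow the predefined patterns using the scene and driver state."""
    if scene == "stopped_at_intersection":
        candidates = ["go_straight", "turn_left", "turn_right", "sudden_start"]
        # A calm driver makes an abrupt start less plausible, so that
        # pattern can be dropped and the calculation load reduced.
        if driver_state.get("haste", 0.0) < 0.5:
            candidates.remove("sudden_start")
        return candidates
    return ["go_straight", "stop"]

def predict_action(patterns, movement):
    """Predict future action only for the narrowed-down patterns."""
    # Placeholder: pick the first candidate; a real device would score each.
    return patterns[0]
```

Because the selector runs before the predictor, the predictor only has to evaluate the few candidates consistent with both the scene and the driver state, which is the narrowing effect described above.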
(2) The driver state information acquirer acquires, as the driver state information, information correlated with driving capability of the driver. Thus, according to the present invention, the predicted action pattern can be narrowed down from among the plurality of action patterns while reflecting the driving capability of the driver at that time, which makes it possible to predict the future action of the prediction target with high precision.
(3) The driver state information acquirer generates the driver state information on the basis of biological information of the driver detected by a biological information sensor provided in the prediction target. Thus, according to the present invention, the predicted action pattern can be narrowed down from among the plurality of action patterns while reflecting the biological information of the driver at that time, which makes it possible to predict the future action of the prediction target with high precision.
(4) The action pattern selector selects the predicted action pattern by using an action pattern prediction model that outputs at least one of the action patterns when the traffic scene and the driver state information are input. Further, a learning method of the moving body prediction device according to the present invention generates input data to the action pattern prediction model on the basis of the traffic scene and the driver state information acquired in a predetermined first time period, generates correct answer data to an output of the action pattern prediction model on the basis of the movement state information acquired in a second time period immediately after the first time period, and learns the action pattern prediction model using learning data obtained by combining the input data and the correct answer data. Thus, according to the present invention, characteristics of the driver of the prediction target can be reflected in the action pattern prediction model, which makes it possible to improve the prediction precision by the moving body prediction device.
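The learning-data generation in (4) — inputs taken from a first time period, correct answers taken from the period immediately after — can be sketched as follows. The log representation, the window split, and the majority-vote labeling rule are assumptions made purely for illustration:

```python
# Hedged sketch of learning-data generation: pair (traffic scene, driver
# state) samples from a first time period with the action actually taken
# in the second time period immediately after, read off the movement log.

def build_learning_data(scene_log, driver_log, movement_log, t_split):
    """Combine input data from [0, t_split) with a correct-answer label
    derived from movement observed in [t_split, end)."""
    input_data = [(scene_log[t], driver_log[t]) for t in range(t_split)]
    # Correct-answer label: the most frequent action in the second period
    # (an assumed labeling rule; the patent only requires that the label
    # be derived from the movement state information of the second period).
    second_period = movement_log[t_split:]
    label = max(set(second_period), key=second_period.count)
    return [(x, label) for x in input_data]
```

Training the action pattern prediction model on pairs produced this way is what lets the model absorb the characteristics of the individual driver.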
(5) The moving body prediction device further includes a collision predictor configured to predict whether or not a collision will occur in the future of the prediction target on the basis of the movement state information, the surrounding state information, the traffic environment information, and the prediction result of the action predictor. Thus, according to the present invention, it is possible to predict whether or not a collision will occur in the future of the prediction target while reflecting the state of the driver at that time.
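As one hedged illustration of the kind of check a collision predictor might perform (the actual prediction logic is not specified at this level of detail), a constant-velocity closest-approach test over a short horizon:

```python
import math

# Illustrative collision check under a constant-velocity assumption:
# two bodies are predicted to collide if they come within a safety
# radius of each other within the prediction horizon. Horizon and
# radius values are assumptions for illustration.

def will_collide(p1, v1, p2, v2, horizon=5.0, radius=2.0):
    """Return True if two constant-velocity bodies come within `radius`
    metres of each other within `horizon` seconds."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]  # relative position
    wx, wy = v2[0] - v1[0], v2[1] - v1[1]  # relative velocity
    speed2 = wx * wx + wy * wy
    # Time of closest approach, clamped to [0, horizon].
    t = 0.0 if speed2 == 0 else max(0.0, min(horizon, -(rx * wx + ry * wy) / speed2))
    dx, dy = rx + wx * t, ry + wy * t
    return math.hypot(dx, dy) <= radius
```

In the device described above, the trajectory of the prediction target would come from the action predictor (and thus reflect the driver state), while the other trajectories would come from the surrounding state information.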
(6) A traffic safety support system includes on-board devices that move along with the prediction target in the traffic area, and a traffic management server capable of communicating with the on-board devices and including the above-described moving body prediction device. Thus, according to the present invention, the movement state information of the prediction target, the surrounding state information, the traffic environment information, and the driver state information can be collected by the traffic management server, which makes it possible to predict, with high precision, whether or not a collision will occur in the future of the prediction target. Further, in a case where it is predicted that the prediction target will collide, the support information notifier of the traffic management server transmits, to the on-board devices, the support information including the information regarding the prediction result, and the on-board notification device of the on-board devices notifies the driver of the information generated on the basis of the support information by at least one selected from the image and sound. Thus, according to the present invention, the driver of the prediction target having received the notification can make the action for avoiding the predicted collision or reducing the damage due to the collision.
(7) In a case where it is predicted by the collision predictor that the prediction target will collide, the traffic management server generates support action information regarding action for avoiding collision or action for reducing damage due to collision, and transmits, to the on-board devices, the support information including the support action information. Further, the on-board notification device notifies the driver of the information, by at least one selected from the image and sound, generated so that the driver performs the driving operation according to the support action information. Thus, according to the present invention, the driver of the prediction target having received the notification performs the driving operation according to the notification, which makes it possible to avoid the predicted collision or reduce the damage due to the collision.
(8) According to the present invention, for the same reasons as the invention disclosed in (6), the movement state information of the prediction target, the surrounding state information, the traffic environment information, the driver state information, and the like can be collected by the traffic management server, which makes it possible to predict, with high precision, whether or not a collision will occur in the future of the prediction target. Further, in a case where it is predicted that the prediction target will collide, the support information notifier of the traffic management server transmits, to the on-board devices, the support information including the information regarding the prediction result, and the on-board driving support device of the on-board devices automatically controls behavior of the prediction target on the basis of the support information. Thus, according to the present invention, the traffic management server can automatically control the behavior of the prediction target through the notification, which makes it possible to automatically avoid the predicted collision or reduce the damage due to the collision.
(9) In a case where it is predicted by the collision predictor that the prediction target will collide, the traffic management server generates support action information regarding action for avoiding collision or action for reducing damage due to collision, and transmits, to the on-board devices, the support information including the support action information. Further, the on-board driving support device automatically controls the behavior of the prediction target on the basis of the support action information. Thus, according to the present invention, the traffic management server can automatically avoid the predicted collision or reduce the damage due to the collision while reflecting the movement state of the prediction target and the state of the surroundings of the prediction target.
A traffic safety support system according to one embodiment of the present invention will be described below with reference to the drawings.
The traffic safety support system 1 recognizes, as individual traffic participants, pedestrians 4 who are persons moving in the target traffic area 9, and four-wheeled vehicles 2, motorcycles 3, and the like that are moving bodies moving in the target traffic area 9. The traffic safety support system 1 then notifies each traffic participant of support information generated through the recognition, to encourage communication between the traffic participants that move on the basis of their individual intentions (specifically, for example, reciprocal recognition between the traffic participants) and recognition of a surrounding traffic environment, and to automatically control behavior of the moving bodies, thereby supporting safe and smooth traffic of the traffic participants in the target traffic area 9.
The traffic safety support system 1 includes on-board devices 20 (including on-board devices mounted on individual four-wheeled vehicles 2 and mobile information processing terminals possessed or worn by drivers who drive the individual four-wheeled vehicles 2) that move along with the individual four-wheeled vehicles 2, on-board devices 30 (including on-board devices mounted on individual motorcycles 3 and mobile information processing terminals possessed or worn by drivers who drive the individual motorcycles 3) that move along with the individual motorcycles 3, mobile information processing terminals 40 possessed or worn by the respective pedestrians 4, a plurality of infrastructure cameras 56 provided in the target traffic area 9, a traffic light control device 55 that controls the traffic lights 54, and a traffic management server 6 communicably connected to a plurality of terminals (hereinafter, also simply referred to as "area terminals") existing in the target traffic area 9, such as the on-board devices 20 and 30, the mobile information processing terminals 40, the infrastructure cameras 56, and the traffic light control device 55.
The traffic management server 6 includes one or more computers connected to the above-described plurality of area terminals via base stations 57 so as to be able to perform communication. More specifically, the traffic management server 6 includes a server connected to the plurality of area terminals via the base stations 57, a network core and the Internet, an edge server connected to the plurality of area terminals via the base stations 57 and an MEC (multi-access edge computing) core, and the like.
The on-board devices 20 mounted on the four-wheeled vehicle 2 in the target traffic area 9 include, for example, an on-board driving support device 21 that supports driving by a driver, an on-board notification device 22 that notifies the driver of various kinds of information, a driving subject state sensor 23 that detects a state of the driver who is driving, an on-board communication device 24 that performs wireless communication between the own vehicle and the traffic management server 6, and between the own vehicle and other vehicles near the own vehicle, and the like.
The on-board driving support device 21 includes an external sensor unit, an own vehicle state sensor, a navigation device, a driving support ECU, and the like. The external sensor unit includes an exterior camera unit that captures an image around the own vehicle, a plurality of on-board external sensors mounted on the own vehicle, such as a radar unit and a LIDAR (light detection and ranging) unit that detect targets outside the vehicle using electromagnetic waves, and an outside recognition device that acquires information regarding a state around the own vehicle by performing sensor fusion processing on detection results by these on-board external sensors. The own vehicle state sensor includes sensors that acquire information regarding a traveling state of the own vehicle, such as a vehicle speed sensor, an acceleration sensor, a steering angle sensor, a yaw rate sensor, a position sensor, and an orientation sensor. The navigation device includes, for example, a GNSS (global navigation satellite system) receiver that specifies a current position of the own vehicle on the basis of a signal received from a GNSS satellite, a storage device that stores map information, and the like.
The driving support ECU executes driving support control that automatically controls behavior of the vehicle, such as lane departure prevention control, lane change control, preceding vehicle following control, erroneous start prevention control, collision mitigation brake control, and collision avoidance control on the basis of the information acquired by an on-board sensing device such as the external sensor unit, the own vehicle state sensor, and the navigation device and coordination support information transmitted from the traffic management server 6. Further, the driving support ECU generates driving support information for supporting safe driving by the driver on the basis of the information acquired by the external sensor unit, the own vehicle state sensor, the navigation device, and the like, and transmits the driving support information to the on-board notification device 22.
The driving subject state sensor 23 includes various devices that acquire time-series data of information correlated with driving capability of the driver who is driving. The driving subject state sensor 23 includes, for example, an on-board camera that acquires face image data of the driver who is driving, a biological information sensor that acquires biological information of the driver who is driving, and the like. Here, the biological information sensor more specifically includes a seat belt sensor that is provided at a seat belt to be fastened by the driver and detects a pulse of the driver, whether or not the driver breathes, and the like, a steering sensor that is provided at a steering wheel to be gripped by the driver and detects a skin potential of the driver, and a wearable terminal that detects a heart rate, a blood pressure, a degree of saturation of oxygen in blood, and the like.
The on-board communication device 24 has a function of transmitting, to the traffic management server 6, the information acquired by the driving support ECU (including the information acquired by the external sensor unit, the own vehicle state sensor, the navigation device, and the like, control information regarding driving support control that is being executed, and the like), the information regarding the driving subject acquired by the driving subject state sensor 23 (the face image data and the biological information of the driver), and the like, and a function of receiving coordination support information transmitted from the traffic management server 6 and transmitting the received coordination support information to the on-board driving support device 21 and the on-board notification device 22.
The on-board notification device 22 includes various devices that notify the driver of various kinds of information through auditory sense, visual sense, haptic sense, and the like, by causing a human machine interface (hereinafter, may be abbreviated as an “HMI”) to operate in an aspect determined on the basis of the driving support information transmitted from the on-board driving support device 21 and the coordination support information transmitted from the traffic management server 6.
The on-board devices 30 mounted on the motorcycle 3 in the target traffic area 9 include, for example, an on-board driving support device 31 that supports driving by a rider, an on-board notification device 32 that notifies the rider of various kinds of information, a rider state sensor 33 that detects a state of the rider who is driving, an on-board communication device 34 that performs wireless communication between the own vehicle and the traffic management server 6, and between the own vehicle and other vehicles near the own vehicle, and the like.
The on-board driving support device 31 includes an external sensor unit, an own vehicle state sensor, a navigation device, a driving support ECU, and the like. The external sensor unit includes an exterior camera unit that captures an image around the own vehicle, a plurality of on-board external sensors mounted on the own vehicle, such as a radar unit and a LIDAR unit that detect targets outside the vehicle using electromagnetic waves, and an outside recognition device that acquires information regarding a state around the own vehicle by performing sensor fusion processing on detection results by these on-board external sensors. The own vehicle state sensor includes sensors that acquire information regarding a traveling state of the own vehicle, such as a vehicle speed sensor and a five-axis or six-axis inertial measurement device. The navigation device includes, for example, a GNSS receiver that specifies a current position on the basis of a signal received from a GNSS satellite, a storage device that stores map information, and the like.
The driving support ECU executes driving support control that automatically controls behavior of the vehicle, such as lane keeping control, lane departure prevention control, lane change control, preceding vehicle following control, erroneous start prevention control, and collision mitigation brake control on the basis of the information acquired by an on-board sensing device such as the external sensor unit, the own vehicle state sensor, and the navigation device and coordination support information transmitted from the traffic management server 6. Further, the driving support ECU generates driving support information for supporting safe driving by the rider on the basis of the information acquired by the external sensor unit, the own vehicle state sensor, the navigation device, and the like, and transmits the driving support information to the on-board notification device 32.
The rider state sensor 33 includes various devices that acquire information correlated with driving capability of the rider who is driving. The rider state sensor 33 includes, for example, an on-board camera that acquires face image data of the rider who is driving, a biological information sensor that acquires biological information of the rider who is driving, and the like. Here, the biological information sensor more specifically includes a seat sensor that is provided at a seat on which the rider sits and detects a pulse of the rider, whether or not the rider breathes, and the like, a helmet sensor that is provided at a helmet worn by the rider and detects a pulse of the rider, whether or not the rider breathes, a skin potential of the rider, and the like, and a wearable terminal that detects a heart rate, a blood pressure, a degree of saturation of oxygen in blood, and the like.
The on-board communication device 34 has a function of transmitting, to the traffic management server 6, the information acquired by the driving support ECU (including the information acquired by the external sensor unit, the own vehicle state sensor, the navigation device, and the like, control information regarding driving support control that is being executed, and the like), the information regarding the rider acquired by the rider state sensor 33 (the face image data and the biological information of the rider), and the like, and a function of receiving coordination support information transmitted from the traffic management server 6 and transmitting the received coordination support information to the on-board driving support device 31 and the on-board notification device 32.
The on-board notification device 32 includes various devices that notify the rider of various kinds of information through auditory sense, visual sense, haptic sense, and the like of the rider, by causing the HMI to operate in an aspect determined on the basis of the driving support information transmitted from the on-board driving support device 31 and the coordination support information transmitted from the traffic management server 6.
The mobile information processing terminal 40 possessed or worn by the pedestrian 4 in the target traffic area 9 includes, for example, a wearable terminal worn by the pedestrian 4, a smartphone possessed by the pedestrian 4, and the like. The wearable terminal has a function of measuring biological information of the pedestrian 4 such as a heart rate, a blood pressure, and a degree of saturation of oxygen in blood and transmitting the measurement data of the biological information to the traffic management server 6, a function of transmitting pedestrian information regarding the pedestrian 4 such as position information, travel acceleration, and schedule information of the pedestrian 4, and a function of receiving the coordination support information transmitted from the traffic management server 6.
Further, the mobile information processing terminal 40 includes a notifier 42 that notifies the pedestrian of various kinds of information through auditory sense, visual sense, haptic sense, and the like of the pedestrian, by causing the HMI to operate in an aspect determined on the basis of the received coordination support information.
The infrastructure camera 56 captures images of traffic infrastructure equipment including a road, an intersection and a pavement in a target traffic area and moving bodies and pedestrians that move on the road, the intersection, the pavement, and the like, and transmits the obtained image information to the traffic management server 6.
The traffic light control device 55 controls the traffic lights and transmits traffic light state information regarding current lighting color of the traffic lights provided in the target traffic area, a timing at which the lighting color is switched, and the like, to the traffic management server 6.
The traffic management server 6 is a computer that supports safe and smooth traffic of traffic participants in the target traffic area by generating, on the basis of the information acquired from the plurality of area terminals existing in the target traffic area as described above, coordination support information for encouraging, for each traffic participant to be supported, communication between the traffic participants and recognition of a surrounding traffic environment, and by notifying each traffic participant of the coordination support information. Note that in the present embodiment, among the plurality of traffic participants existing in the target traffic area, traffic participants including means for receiving the coordination support information generated at the traffic management server 6 and causing the HMI to operate in an aspect determined on the basis of the received coordination support information (for example, the on-board devices 20 and 30, the mobile information processing terminal 40, and the notifiers 22, 32 and 42) are set as targets to be supported by the traffic management server 6.
The traffic management server 6 includes a target traffic area recognizer 60 that recognizes persons and moving bodies in the target traffic area as individual traffic participants, a driving subject information acquirer 61 that acquires driving subject state information correlated with driving capabilities of driving subjects of the moving bodies recognized as the traffic participants by the target traffic area recognizer 60, a predictor 62 that predicts futures of a plurality of traffic participants in the target traffic area, a coordination support information notifier 65 that transmits coordination support information generated for each of the individual traffic participants recognized as support targets by the target traffic area recognizer 60, a traffic environment database 67 in which information regarding a traffic environment of the target traffic area is accumulated, and a driving history database 68 in which information regarding past driving history by the driving subjects registered in advance is accumulated.
In the traffic environment database 67, information regarding traffic environments of the traffic participants in the target traffic area, such as map information of the target traffic area registered in advance (for example, a width of the road, the number of lanes, speed limit, a width of the pavement, whether or not there is a guardrail between the road and the pavement, and a position of a crosswalk) and risk area information regarding a high risk area with a particularly high risk in the target traffic area, is stored. In the following description, the information stored in the traffic environment database 67 will also be referred to as registered traffic environment information.
In the driving history database 68, information regarding past driving history of the driving subjects registered in advance is stored in association with registration numbers of moving bodies possessed by the driving subjects. Thus, if the registration numbers of the recognized moving bodies can be specified by the target traffic area recognizer 60 which will be described later, the past driving history of the driving subjects of the recognized moving bodies can be acquired by searching the driving history database 68 on the basis of the registration numbers. In the following description, the information stored in the driving history database 68 will also be referred to as registered driving history information.
The target traffic area recognizer 60 recognizes traffic participants that are persons or moving bodies in the target traffic area and recognition targets including traffic environments of the respective traffic participants in the target traffic area on the basis of the information transmitted from the above-described area terminal (the on-board devices 20 and 30, the mobile information processing terminal 40, the infrastructure camera 56 and the traffic light control device 55) in the target traffic area and the registered traffic environment information read from the traffic environment database 67 and acquires recognition information regarding the recognition targets.
Here, the information transmitted from the on-board driving support device 21 and the on-board communication device 24 included in the on-board devices 20 to the target traffic area recognizer 60 and the information transmitted from the on-board driving support device 31 and the on-board communication device 34 included in the on-board devices 30 to the target traffic area recognizer 60 include information regarding traffic participants around the own vehicle and a state regarding the traffic environment acquired by the external sensor unit, information regarding a state of the own vehicle as one traffic participant acquired by the own vehicle state sensor, the navigation device and the like, and the like. Further, the information transmitted from the mobile information processing terminal 40 to the target traffic area recognizer 60 includes information regarding a state of a pedestrian as one traffic participant, such as a position and travel acceleration. Still further, the image information transmitted from the infrastructure camera 56 to the target traffic area recognizer 60 includes information regarding the respective traffic participants and traffic environments of the traffic participants, such as appearance of the traffic infrastructure equipment such as the road, the intersection and the pavement, and appearance of traffic participants moving in the target traffic area. Further, the traffic light state information transmitted from the traffic light control device 55 to the target traffic area recognizer 60 includes information regarding traffic environments of the respective traffic participants such as current lighting color of the traffic lights and a timing for switching the lighting color. 
Further, the registered traffic environment information to be read by the target traffic area recognizer 60 from the traffic environment database 67 includes information regarding traffic environments of the respective traffic participants such as map information, the risk area information, and the like, of the target traffic area.
Thus, the target traffic area recognizer 60 can acquire recognition information of each traffic participant (hereinafter, also referred to as “traffic participant recognition information”) such as a position of each traffic participant in the target traffic area, a moving vector (that is, a vector extending along a moving direction and having a length proportional to moving speed), travel acceleration, a vehicle type of the moving body, a vehicle rank, a registration number of the moving body, the number of pedestrians, and an age group of each pedestrian on the basis of the information transmitted from the area terminals. Further, the target traffic area recognizer 60 can acquire recognition information of the traffic environment (hereinafter, also referred to as “traffic environment recognition information”) of each traffic participant in the target traffic area such as a width of the road, the number of lanes, speed limit, a width of the pavement, whether or not there is a guardrail between the road and the pavement, lighting color of the traffic light, a switching timing of the lighting color, and the risk area information on the basis of the information transmitted from the area terminals.
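The traffic participant recognition information described above can be sketched as a simple record; a minimal illustration in Python follows, in which the field names and types are assumptions for illustration only and are not part of the embodiment:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class TrafficParticipantRecognition:
    """One traffic participant as recognized by the target traffic area
    recognizer 60. Fields that do not apply (e.g. pedestrian attributes
    for a vehicle) are simply left unset."""
    position: Tuple[float, float]        # position in the target traffic area
    moving_vector: Tuple[float, float]   # direction scaled by moving speed
    travel_acceleration: float
    vehicle_type: Optional[str] = None   # e.g. "four-wheeled", "motorcycle"
    vehicle_rank: Optional[str] = None
    registration_number: Optional[str] = None
    pedestrian_count: Optional[int] = None
    age_group: Optional[str] = None

# Example: a four-wheeled vehicle moving along +x at 8.3 m/s.
p = TrafficParticipantRecognition(
    position=(12.0, 4.5),
    moving_vector=(8.3, 0.0),
    travel_acceleration=0.2,
    vehicle_type="four-wheeled",
)
```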
The target traffic area recognizer 60 transmits the traffic participant recognition information and the traffic environment recognition information acquired as described above to the driving subject information acquirer 61, the predictor 62, the coordination support information notifier 65, and the like.
The driving subject information acquirer 61 acquires driving subject state information and driving subject characteristic information correlated with current driving capabilities of the driving subjects of the moving bodies recognized as the traffic participants by the target traffic area recognizer 60 on the basis of the information transmitted from the above-described area terminals (particularly, the on-board devices 20 and 30) in the target traffic area and the registered driving history information read from the driving history database 68.
More specifically, in a case where the driving subject of the four-wheeled vehicle recognized as the traffic participant by the target traffic area recognizer 60 is a person, the driving subject information acquirer 61 acquires the information transmitted from the on-board devices 20 mounted on the four-wheeled vehicle as driving subject state information of the driver. Further, in a case where the driving subject of the motorcycle recognized as the traffic participant by the target traffic area recognizer 60 is a person, the driving subject information acquirer 61 acquires the information transmitted from the on-board devices 30 mounted on the motorcycle as driving subject state information of the rider.
Here, the information to be transmitted from the driving subject state sensor 23 and the on-board communication device 24 included in the on-board devices 20 to the driving subject information acquirer 61 includes face image data of the driver who is driving, and time-series data such as biological information of the driver who is driving, which is correlated with driving capability of the driver who is driving. Further, the information to be transmitted from the rider state sensor 33 and the on-board communication device 34 included in the on-board devices 30 to the driving subject information acquirer 61 includes face image data of the rider who is driving, and time-series data such as biological information of the rider who is driving, which is correlated with driving capability of the rider who is driving. Further, the information to be transmitted from the mobile information processing terminals 25 and 35 included in the on-board devices 20 and 30 to the driving subject information acquirer 61 includes personal schedule information of the driver and the rider. In a case where the driver and the rider drive the moving bodies, for example, under a tight schedule, the driver and the rider may feel pressed, and their driving capabilities may degrade. Thus, it can be said that the personal schedule information of the driver and the rider is information correlated with the driving capabilities of the driver and the rider.
The driving subject information acquirer 61 acquires driving subject characteristic information regarding characteristics of driving of the driving subject (such as, for example, frequent sudden lane changes and frequent sudden acceleration and deceleration) correlated with current driving capability of the driving subject who is driving, by using both or one of the driving subject state information for the driving subject acquired through the procedure described above and the registered driving history information read from the driving history database 68.
The driving subject information acquirer 61 transmits the driving subject state information and the driving subject characteristic information of the driving subject acquired as described above to the predictor 62, the coordination support information notifier 65 and the like.
The predictor 62 extracts part of the traffic area in the target traffic area as a monitoring area and predicts risks in the future of prediction target determined among a plurality of traffic participants in the monitoring area on the basis of the traffic participant recognition information and the traffic environment recognition information acquired by the target traffic area recognizer 60 and the driving subject state information and the driving subject characteristic information acquired by the driving subject information acquirer 61.
Here, the target traffic area is a traffic area of a relatively broad range determined, for example, in municipal units. In contrast, the monitoring area is a traffic area such as, for example, an area near an intersection or a specific facility, through which a four-wheeled vehicle can pass in approximately a few tens of seconds in a case where the four-wheeled vehicle travels at the legal speed.
The movement state information acquirer 620 determines, as a prediction target, one traffic participant among the plurality of traffic participants existing in the monitoring area on the basis of the traffic participant recognition information transmitted from the target traffic area recognizer 60, and acquires movement state information regarding a movement state of the prediction target. More specifically, the movement state information acquirer 620 extracts information regarding a movement state of the prediction target from the traffic participant recognition information acquired by the target traffic area recognizer 60, and acquires the extracted information as the movement state information. Here, the movement state information includes, for example, a plurality of parameters that characterize a movement state of the prediction target such as a position of the prediction target, a moving vector, travel acceleration, a vehicle type, and a vehicle rank.
The surrounding state information acquirer 621 specifies a plurality of traffic participants existing around the prediction target in the monitoring area on the basis of the traffic participant recognition information transmitted from the target traffic area recognizer 60, and acquires surrounding state information regarding movement states of the plurality of traffic participants existing around the prediction target. More specifically, the surrounding state information acquirer 621 extracts information regarding movement states of the plurality of traffic participants existing around the prediction target from the traffic participant recognition information acquired by the target traffic area recognizer 60, and acquires the extracted information as the surrounding state information. Here, the surrounding state information includes, for example, a plurality of parameters that characterize a movement state of each traffic participant such as a position, a moving vector, travel acceleration, a vehicle type, and a vehicle rank of each traffic participant existing around the prediction target.
The traffic environment information acquirer 622 acquires traffic environment information of the surroundings of the prediction target in the monitoring area on the basis of the traffic environment recognition information transmitted from the target traffic area recognizer 60 and the registered traffic environment information stored in the traffic environment database 67. More specifically, the traffic environment information acquirer 622 extracts information regarding a surrounding traffic environment for the monitoring area or the prediction target from the traffic environment recognition information acquired by the target traffic area recognizer 60 and the registered traffic environment information stored in the traffic environment database 67, and acquires the extracted information as the traffic environment information. Here, the traffic environment information includes, for example, a plurality of parameters that characterize a surrounding traffic environment for the prediction target such as a width of the road, the number of lanes, speed limit, a width of the pavement, whether or not there is a guardrail between the road and the pavement, lighting color of the traffic light, a switching timing of the lighting color, and the risk area information.
The driver state information acquirer 623 acquires driver state information regarding a state of a driver of the prediction target on the basis of the driving subject state information transmitted from the driving subject information acquirer 61. Here, the driver state information refers to information correlated with driving capability of the driver at that time, and more specifically refers to information in which an emotion state and a physical condition of the driver at that time are reflected. While in the present embodiment, a case will be described where an impatience parameter value representing quantification of an impatience degree of the driver is defined as the driver state information, the present invention is not limited to this. Further, while in the present embodiment, a case will be described where the impatience parameter value may take any one of three values including a value 0 indicating that the driver is in the normal state, a value 1 indicating that the driver is in a slightly impatient state, and a value 2 indicating that the driver is in a strongly impatient state, the present invention is not limited to this.
The driving subject information acquired by the driving subject information acquirer 61 as described above includes face image data of the driver of the prediction target, time-series data such as biological information of the driver of the prediction target, and schedule information of the driver. Thus, the driver state information acquirer 623 calculates the impatience parameter value indicating the impatience degree of the driver at that time on the basis of the face image data, the time-series data such as biological information, schedule information and the like.
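The calculation of the three-level impatience parameter value described above can be sketched as follows. This is a minimal illustration assuming simple heart-rate and schedule thresholds; the embodiment only states that the value 0/1/2 is derived from the face image data, biological time-series data, and schedule information, without giving a concrete formula:

```python
def impatience_parameter(heart_rate_bpm, resting_rate_bpm,
                         minutes_to_next_appointment):
    """Return the impatience parameter value of the driver:
    0 = normal state, 1 = slightly impatient, 2 = strongly impatient.
    The 1.2x heart-rate factor and the 10-minute schedule threshold
    are illustrative assumptions."""
    elevated = heart_rate_bpm > 1.2 * resting_rate_bpm
    tight_schedule = (minutes_to_next_appointment is not None
                      and minutes_to_next_appointment < 10)
    if elevated and tight_schedule:
        return 2  # strongly impatient state
    if elevated or tight_schedule:
        return 1  # slightly impatient state
    return 0      # normal state
```

In practice such a value would be fused from several time-series signals; a single pair of thresholds is shown only to make the three-level quantization concrete.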
The traffic scene specifier 624 specifies a traffic scene of the prediction target in the monitoring area on the basis of the movement state information, the surrounding state information, and the traffic environment information. More specifically, the traffic scene specifier 624 specifies a traffic scene of the prediction target by determining values of a plurality of traffic scene parameters characterizing a traffic scene of the prediction target on the basis of the movement state information, the surrounding state information, and the traffic environment information.
Here, examples of the traffic scene parameters include the number of lanes of a traveling road, types of the lanes, widths of the lanes, a position of the lane on which the prediction target exists, the legal speed limit of the traveling road, a speed range of the prediction target, whether or not a preceding vehicle of the prediction target is traveling, a speed range of the preceding vehicle, a distance between the preceding vehicle and the prediction target, a vehicle rank of the preceding vehicle, whether or not a following vehicle of the prediction target is traveling, a speed range of the following vehicle, a distance between the following vehicle and the prediction target, a vehicle rank of the following vehicle, whether or not a right-side parallel traveling vehicle exists on the right side of the prediction target, a speed range of the right-side parallel traveling vehicle, a distance between the right-side parallel traveling vehicle and the prediction target, a vehicle rank of the right-side parallel traveling vehicle, whether or not a left-side parallel traveling vehicle exists on the left side of the prediction target, a speed range of the left-side parallel traveling vehicle, a distance between the left-side parallel traveling vehicle and the prediction target, a vehicle rank of the left-side parallel traveling vehicle, whether or not a traffic light exists in front of the prediction target, a color of the traffic light, and a distance to the traffic light.
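Determining the values of the traffic scene parameters listed above can be sketched as assembling a parameter set from the movement state, surrounding state, and traffic environment information. The sketch below covers only a few of the listed parameters, and the discretization of speeds into ranges is an assumed example:

```python
def specify_traffic_scene(num_lanes, ego_lane, ego_speed, legal_speed,
                          preceding_speed=None, gap_to_preceding=None):
    """Return a subset of the traffic scene parameters as a dict.
    Speeds are in km/h; the range boundaries are illustrative."""
    def speed_range(v):
        if v is None:
            return None
        return "low" if v < 30 else "medium" if v < 60 else "high"

    return {
        "num_lanes": num_lanes,
        "ego_lane_position": ego_lane,
        "legal_speed_limit": legal_speed,
        "ego_speed_range": speed_range(ego_speed),
        "preceding_vehicle_present": preceding_speed is not None,
        "preceding_speed_range": speed_range(preceding_speed),
        "gap_to_preceding_m": gap_to_preceding,
    }

# Example: two-lane road, ego in lane 1 at 45 km/h behind a vehicle
# traveling at 40 km/h with a 25 m gap.
scene = specify_traffic_scene(num_lanes=2, ego_lane=1, ego_speed=45,
                              legal_speed=60, preceding_speed=40,
                              gap_to_preceding=25.0)
```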
The action pattern selector 625 selects, as a predicted action pattern, at least one from among a plurality of predetermined action patterns on the basis of the driver state information acquired by the driver state information acquirer 623 and the traffic scene specified by the traffic scene specifier 624. Here, the action pattern selector 625 previously determines, as the plurality of action patterns, constant speed action of maintaining the current speed, deceleration action of decreasing the speed from the current speed, stop action of stopping the prediction target, acceleration action of increasing the speed from the current speed, preceding vehicle following action of following the preceding vehicle, left-side parallel traveling vehicle following action of following the left-side parallel traveling vehicle, right-side parallel traveling vehicle following action of following the right-side parallel traveling vehicle, right-side lane change action of changing the travel lane to the right lane, left-side lane change action of changing the travel lane to the left lane, right-side cutting-in action of cutting in between the preceding vehicle and the right-side parallel traveling vehicle, left-side cutting-in action of cutting in between the preceding vehicle and the left-side parallel traveling vehicle, right-side overtaking action of overtaking the preceding vehicle from the right side, left-side overtaking action of overtaking the preceding vehicle from the left side, a combination action of these actions, and the like.
The action pattern selector 625 selects at least one action pattern as the predicted action pattern from among the plurality of action patterns by using an action pattern prediction model that outputs at least one action pattern selected from the above-described plurality of action patterns when input data generated on the basis of the driver state information and the traffic scene with respect to the prediction target is input, for example. The action pattern prediction model associates the driver state information and the traffic scene with respect to the prediction target with the predicted action pattern believed to be more likely to be taken by this prediction target in the near future. In other words, the action pattern selector 625 sets, as the predicted action pattern, an output of the action pattern prediction model when the input data generated on the basis of the driver state information and the traffic scene is input to the action pattern prediction model. Here, the action pattern selector 625 uses, as the action pattern prediction model, a deep neural network (DNN) constructed for each prediction target by machine learning using the data acquired from the prediction target.
In such an action pattern prediction model, a DNN constructed by repeatedly performing a learning method described below for each prediction target is used. The learning method includes the steps of generating input data to the action pattern prediction model on the basis of the traffic scene and the driver state information acquired in a predetermined first time period, generating correct answer data to an output of the action pattern prediction model on the basis of the movement state information acquired in a second time period immediately after the first time period, and learning the action pattern prediction model using learning data obtained by combining the input data and the correct answer data.
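The generation of learning data described in the learning method above can be sketched as pairing inputs from a first time period with the action actually observed in the immediately following second time period. The mappings and fixed-length windows below are illustrative assumptions; the embodiment does not specify a concrete data layout:

```python
def build_learning_data(inputs_by_time, actions_by_time,
                        first_period, second_period):
    """Pair model inputs from a first time period with the correct-answer
    action pattern observed in the second time period immediately after it.

    inputs_by_time:  maps a time step to the (traffic scene, driver state)
                     input vector for that step.
    actions_by_time: maps a time step to the action pattern inferred from
                     the movement state information at that step.
    Returns a list of (input_window, correct_answer) pairs."""
    pairs = []
    t = 0
    last = max(inputs_by_time)
    while t + first_period + second_period <= last + 1:
        x = [inputs_by_time[s] for s in range(t, t + first_period)]
        # The correct answer is the action observed right after the window.
        y = actions_by_time[t + first_period]
        pairs.append((x, y))
        t += 1
    return pairs

# Example with toy data: inputs at steps 0..3, observed actions "a".."d".
pairs = build_learning_data(
    inputs_by_time={0: [0.0], 1: [0.1], 2: [0.2], 3: [0.3]},
    actions_by_time={0: "a", 1: "b", 2: "c", 3: "d"},
    first_period=2, second_period=1)
```

Each resulting pair would then be used as one (input, correct answer) sample for supervised training of the DNN.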
As described above, in the present embodiment, a case has been described where the action pattern selector 625 selects a predicted action pattern by using the action pattern prediction model, but the present invention is not limited to this. The action pattern selector 625 may select at least one predicted action pattern from among the plurality of action patterns by using a table that associates the driver state information and the traffic scene with respect to the prediction target with the predicted action pattern believed to be more likely to be taken by this prediction target in the near future.
Here, specific procedure for selecting a predicted action pattern from among the plurality of action patterns by the action pattern selector 625 will be described with reference to
As illustrated in
In the action pattern selector 625, a plurality of action patterns that may be taken by the first traffic participant 81 as the prediction target during the period from the state illustrated in
The action pattern selector 625 selects, as a predicted action pattern, at least one from among a plurality of predetermined action patterns as illustrated in
Returning to
The collision predictor 627 predicts whether or not a collision will occur in the future of the prediction target on the basis of the movement state information, the surrounding state information, the traffic environment information, and the prediction result of the action predictor 626. More specifically, the collision predictor 627 generates a risk map that associates the moving speed of the prediction target with a collision risk value in the future of the prediction target on the basis of the movement state information, the surrounding state information, the traffic environment information, and the predicted traveling path of the prediction target predicted by the action predictor 626. Further, the collision predictor 627 predicts whether or not a collision will occur in the future of the prediction target by searching the risk map on the basis of the predicted moving speed profile of the prediction target predicted by the action predictor 626. Here, a specific procedure for generating the risk map in the collision predictor 627 will be described with reference to
First, the collision predictor 627 calculates predicted traveling paths 91a, 92a, and 93a of the respective traffic participants 91, 92, and 93 as indicated by dashed-line arrows in
Next, the collision predictor 627 generates a risk map for the prediction target in a case where the traffic participants 91 to 93 are assumed to travel along the respective predicted traveling paths 91a to 93a up to the predicted period ahead, on the basis of the movement state information, the surrounding state information and the traffic environment information.
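The risk map generation described above can be sketched as evaluating, for each candidate moving speed of the prediction target, the worst-case proximity to the other traffic participants as all of them advance along their predicted traveling paths. The inverse-distance risk measure and all numeric defaults below are assumptions, since the embodiment does not give a concrete risk formula:

```python
import math

def risk_map(ego_path_fn, other_path_fns, speeds,
             horizon_s=10.0, dt=0.5, safe_gap_m=5.0):
    """Associate each candidate constant moving speed of the prediction
    target with a collision risk value in [0, 1].

    ego_path_fn(distance) -> (x, y): position along the predicted
        traveling path of the prediction target.
    other_path_fns: list of time -> (x, y) functions for the predicted
        traveling paths of the surrounding traffic participants."""
    result = {}
    for v in speeds:
        worst = 0.0
        t = 0.0
        while t <= horizon_s:
            ex, ey = ego_path_fn(v * t)
            for other in other_path_fns:
                ox, oy = other(t)
                gap = math.hypot(ex - ox, ey - oy)
                # Risk grows as the gap shrinks below the safe gap.
                risk = 1.0 if gap < 1e-6 else min(1.0, safe_gap_m / gap)
                worst = max(worst, risk)
            t += dt
        result[v] = worst
    return result

# Example: ego moves along +x from the origin; another participant is
# stopped 20 m ahead. At 4 m/s the ego reaches it within the horizon.
rm = risk_map(lambda d: (d, 0.0), [lambda t: (20.0, 0.0)], [0.0, 4.0])
```

Searching such a map over candidate moving speed profiles then yields the collision risk values used in the prediction of whether a collision will occur.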
As illustrated in
Thus, according to the risk map illustrated in
Returning to
More specifically, for example, in
In a case where it is predicted by the collision predictor 627 that the prediction target is highly likely to collide, the support action information generator 628 generates support action information regarding action for avoiding collision of the prediction target or action for reducing damage due to collision on the basis of the movement state information, the surrounding state information, and the traffic environment information. The support action information generator 628 generates the support action information on the basis of the risk map generated by the collision predictor 627, for example.
More specifically, the support action information generator 628 generates, as the support action information, a moving speed profile from the current time point to the predetermined period ahead so that an evaluation value shown in the following equation (1) becomes maximum. In the following equation (1), a “maximum risk value” is a maximum value of the collision risk value calculated by searching the risk map on the basis of the moving speed profile. In the following equation (1), a “moving period” is a period required for the speed of the prediction target to transition from the current speed to the constant speed in the moving speed profile. Further, in the following equation (1), “acceleration or deceleration” is an absolute value of the acceleration of the prediction target until the speed of the prediction target transitions from the current speed to the constant speed in the moving speed profile. Further, in the following equation (1), “a” and “b” each are positive coefficients.
Evaluation value = 1 / (Maximum risk value + a × Moving period + b × Acceleration or deceleration) … (1)
As shown in the above-described equation (1), the evaluation value increases as the collision risk value is reduced, increases as the acceleration or deceleration decreases, and increases as the moving period is reduced. Thus, the support action information generator 628 generates, as the support action information, the moving speed profile that maximizes the evaluation value shown in the above-described equation (1), which makes it possible to generate the support action information so that both of the collision risk value and the acceleration or deceleration of the prediction target are reduced and the moving period required for a transition to the constant speed is reduced. More specifically, the support action information generator 628 generates, as the support action information, a moving speed profile indicated by the broken line 99c under the risk map illustrated in
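The selection of a moving speed profile by equation (1) can be sketched as scoring each candidate profile and keeping the best one. The coefficient values below are arbitrary examples; equation (1) only requires them to be positive:

```python
def evaluation_value(max_risk, moving_period_s, accel_mag, a=0.1, b=0.2):
    """Evaluation value of equation (1); higher is better.
    max_risk: maximum collision risk value from the risk map search.
    moving_period_s: period to transition to the constant speed.
    accel_mag: absolute acceleration or deceleration during the transition."""
    return 1.0 / (max_risk + a * moving_period_s + b * accel_mag)

def best_profile(candidates):
    """Pick the candidate that maximizes equation (1). Each candidate is
    a (max_risk, moving_period, accel_mag) triple, e.g. obtained by
    searching the risk map for one moving speed profile."""
    return max(candidates, key=lambda c: evaluation_value(*c))

# Example: two candidate profiles differing only in their risk map search
# result; the lower-risk profile wins.
chosen = best_profile([(0.5, 2.0, 1.0), (0.1, 2.0, 1.0)])
```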
In a case where it is predicted by the collision predictor 627 of the predictor 62 that the prediction target is highly likely to collide, the coordination support information notifier 65 generates information regarding the prediction result of the collision predictor 627 (that is, the collision risk value, the risk map, and the like), and coordination support information including the support action information generated by the support action information generator 628, and the like, and transmits the generated information to the on-board devices 20, 30 moving along with the traffic participant determined as the prediction target.
As described above, the on-board devices 20, 30 include the on-board notification devices 22, 32 that cause the HMI to operate in an aspect determined on the basis of the coordination support information transmitted from the coordination support information notifier 65. Thus, the on-board notification devices 22, 32 having received the coordination support information notify the driver of the information regarding the prediction result of the collision predictor 627 by at least one selected from the image and sound, which can cause the driver to recognize the existence of the predicted risks. Further, the on-board notification devices 22, 32 having received the coordination support information notify the driver of information generated for encouraging the driver to perform a driving operation according to the support action information by at least one selected from the image and sound, which can encourage the driver to perform the driving operation for avoiding the predicted collision or reducing the damage.
As shown in
Returning to
First, in step ST1, the traffic management server 6 determines the monitoring area among the target traffic area, and the processing transitions to step ST2. In step ST2, the traffic management server 6 recognizes a plurality of traffic participants existing in the monitoring area and further determines a prediction target among the plurality of traffic participants, and the processing transitions to step ST3.
In step ST3, the traffic management server 6 acquires movement state information of the prediction target, surrounding state information of the traffic participants around the prediction target in the monitoring area, and traffic environment information around the prediction target in the monitoring area, and the processing transitions to step ST4. In step ST4, the traffic management server 6 acquires driver state information of a driver of the prediction target, and the processing transitions to step ST5.
In step ST5, the traffic management server 6 specifies a traffic scene in which the prediction target is placed, on the basis of the movement state information, the surrounding state information, and the traffic environment information, and the processing transitions to step ST6. In step ST6, the traffic management server 6 selects, as the predicted action pattern, at least one from among the plurality of predetermined action patterns on the basis of the traffic scene specified in step ST5 and the driver state information of the prediction target, and the processing transitions to step ST7.
In step ST7, the traffic management server 6 predicts future action of the prediction target on the basis of the predicted action pattern selected in step ST6, and the processing transitions to step ST8. In step ST8, the traffic management server 6 predicts whether or not a collision will occur in the future of the prediction target on the basis of the movement state information, the surrounding state information, the traffic environment information, and the prediction result of step ST7, and the processing transitions to step ST9.
In step ST9, the traffic management server 6 generates support action information regarding action for avoiding a collision of the prediction target or reducing the damage in a case where it is predicted in step ST8 that the prediction target is highly likely to collide, and the processing transitions to step ST10.
In step ST10, the traffic management server 6 transmits, to the prediction target, the information regarding the prediction results in steps ST7 to ST8 and the coordination support information including the support action information, and the processing returns to step ST1.
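The processing flow of steps ST1 to ST10 above can be sketched as a single cycle; the sketch assumes the traffic management server 6 exposes one method per step, and all method names below are illustrative, not from the embodiment:

```python
def traffic_management_cycle(server):
    """One pass of steps ST1 to ST10; in operation this cycle repeats."""
    area = server.determine_monitoring_area()                           # ST1
    target = server.determine_prediction_target(area)                   # ST2
    movement, surrounding, environment = server.acquire_state(target)   # ST3
    driver_state = server.acquire_driver_state(target)                  # ST4
    scene = server.specify_traffic_scene(movement, surrounding,
                                         environment)                   # ST5
    pattern = server.select_action_pattern(scene, driver_state)         # ST6
    action = server.predict_future_action(pattern)                      # ST7
    collision = server.predict_collision(movement, surrounding,
                                         environment, action)           # ST8
    support = server.generate_support_action(collision)                 # ST9
    server.notify(target, action, collision, support)                   # ST10
```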
While one embodiment of the present invention has been described above, the present invention is not limited to this. Detailed configurations may be changed as appropriate within a scope of the gist of the present invention. For example, in the above-described embodiment, a case has been described where the predictor 62 that predicts the future of the prediction target which is a moving body in the monitoring area is provided in the traffic management server 6 connected so as to be able to perform communication with the prediction target, but the present invention is not limited to this. The predictor may be included in the on-board devices 20, 30 moving along with the support target. In this case, although the information amount of the movement state information, the surrounding state information, the traffic environment information and the like that can be acquired by the predictor is smaller than the information amount that can be acquired by the traffic management server, there is an advantage that the delay through the communication is small.