This application is based on and claims the benefit of priority to Korean Patent Application No. 10-2016-0077598, filed on Jun. 21, 2016, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein in its entirety by reference.
The present disclosure relates to an apparatus and method for monitoring driver's concentrativeness, and more particularly, to an apparatus and method for monitoring driver's concentrativeness using driver's eye tracing.
Most traffic accidents result from a lack of responsiveness to situations caused by drivers' carelessness and cognitive load. In contrast, a driver's drowsiness or unconscious state may cause a serious accident, but the number of such cases is relatively small.
In particular, the driving pattern of a driver in a cognitive load state is not easily differentiated from that of a concentrative operation state at a simple driving level; however, when the driving environment changes rapidly, a driver in the cognitive load state responds slowly, which may lead to a traffic accident.
The present disclosure has been made to solve the above-mentioned problems occurring in the prior art while advantages achieved by the prior art are maintained intact.
An aspect of the present disclosure provides an apparatus and method for monitoring driver's concentrativeness, capable of tracing a driver's eyes while a vehicle is running to monitor whether the driver is attentively obtaining major information, and of providing an alarm when the driver's concentrativeness drops at the time the major information changes while the vehicle is running.
According to an exemplary embodiment of the present disclosure, an apparatus for monitoring driver's concentrativeness using eye tracing while a vehicle is running includes: a concentrativeness determiner determining in real time sight regions on which a driver keeps eyes while the vehicle is running and determining a sight concentrativeness value of each pixel corresponding to each sight region having a predetermined size in a front image to generate a concentrativeness map corresponding to the front image; a region of interest (ROI) determiner determining an ROI requiring driver's concentrativeness having a level equal to or higher than a predetermined level, relative to a peripheral comparison region, in the front image; and a concentrated state determiner determining whether the driver is in a concentrated state by comparing the ROI with the concentrativeness map corresponding to the front image.
The concentrativeness determiner may determine the sight concentrativeness value of each pixel such that the sight concentrativeness values of the pixels in each sight region are gradually reduced from a central vision portion positioned at the center of the driver's sight direction to a peripheral vision portion therearound.
The concentrativeness determiner may configure the concentrativeness map by determining each sight region of each image frame, may generate the concentrativeness map by accumulating the sight concentrativeness value of each pixel over a predetermined number of frames, and may attenuate the sight concentrativeness value of each pixel every predetermined number of frames.
The concentrativeness determiner may apply respective weight values to the sight concentrativeness value of each pixel of a previous frame and the sight concentrativeness value of each pixel of a current frame, and may add up the weighted values to accumulate the sight concentrativeness values.
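The weighted accumulation described above can be written as a per-pixel recurrence. The symbols below are introduced only for illustration; the disclosure does not fix the weight values.

```latex
C_t(p) = w_{\mathrm{prev}}\, C_{t-1}(p) + w_{\mathrm{cur}}\, S_t(p),
\qquad \text{for example with } w_{\mathrm{prev}} + w_{\mathrm{cur}} = 1,
```

where $S_t(p)$ is the sight concentrativeness value of pixel $p$ determined from the sight region of the current frame $t$, and $C_t(p)$ is the accumulated value stored in the concentrativeness map.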
The concentrativeness determiner may apply the same attenuation rate to all of the pixels in order to attenuate the sight concentrativeness values. Alternatively, the concentrativeness determiner may apply different attenuation rates according to the depth of each pixel, on the basis of depth information of each pixel, in order to attenuate the sight concentrativeness values.
The concentrated state determiner may determine that the driver is in a concentrated state, in response to a determination that an average of sight concentrativeness values of pixels of the ROI is equal to or greater than a threshold value.
The concentrated state determiner may determine that the driver is in a concentrated state, in response to a determination that sight concentrativeness values of pixels of the ROI are increased over time relative to concentrativeness values of the peripheral regions.
The concentrated state determiner may determine that the driver is in a concentrated state, in response to a determination that a distance between the center of pixels having a concentrativeness value equal to or greater than a threshold value and the center of the ROI is smaller than a predetermined value.
The concentrated state determiner may provide an alarm to the driver through an interface, in response to a determination that the concentrated state of the driver is changed to a non-concentrated state.
According to another exemplary embodiment of the present disclosure, a method for monitoring driver's concentrativeness using eye tracing while a vehicle is running includes: determining in real time sight regions on which a driver keeps eyes while the vehicle is running and determining a sight concentrativeness value of each pixel corresponding to each sight region having a predetermined size in a front image to generate a concentrativeness map corresponding to the front image; determining an ROI requiring driver's concentrativeness having a level equal to or higher than a predetermined level, relative to a peripheral comparison region, in the front image; and determining whether the driver is in a concentrated state by comparing the ROI with the concentrativeness map corresponding to the front image.
The generating of the concentrativeness map may include determining the sight concentrativeness value of each pixel such that the sight concentrativeness values of the pixels in each sight region are gradually reduced from a central vision portion positioned at the center of the driver's sight direction to a peripheral vision portion therearound.
The generating of the concentrativeness map may include configuring the concentrativeness map by determining each sight region of each image frame, generating the concentrativeness map by accumulating the sight concentrativeness value of each pixel over a predetermined number of frames, and attenuating the sight concentrativeness value of each pixel every predetermined number of frames.
Respective weight values may be applied to the sight concentrativeness value of each pixel of a previous frame and the sight concentrativeness value of each pixel of a current frame, and the weighted values may be added up to accumulate the sight concentrativeness values.
The same attenuation rate may be applied to all of the pixels in order to attenuate the sight concentrativeness values.
Different attenuation rates may be applied according to the depth of each pixel, on the basis of depth information of each pixel, in order to attenuate the sight concentrativeness values.
The determining of whether the driver is in a concentrated state may include determining, in response to a determination that an average of sight concentrativeness values of pixels of the ROI is equal to or greater than a threshold value, that the driver is in a concentrated state.
The determining of whether the driver is in a concentrated state may include determining, in response to a determination that sight concentrativeness values of the pixels of the ROI are increased over time, relative to concentrativeness values of the peripheral regions, that the driver is in a concentrated state.
The determining of whether the driver is in a concentrated state may include determining, in response to a determination that a distance between the center of pixels having a concentrativeness value equal to or greater than a threshold value and the center of the ROI is smaller than a predetermined value, that the driver is in a concentrated state.
The determining of whether the driver is in a concentrated state may include: providing, in response to a determination that the concentrated state of the driver is changed to a non-concentrated state, an alarm to the driver through an interface.
The above and other objects, features and advantages of the present disclosure will be more apparent from the following detailed description taken in conjunction with the accompanying drawings.
Hereinafter, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. In each drawing, like reference numerals refer to like elements. Detailed descriptions of relevant known functions and/or configurations are omitted. In the description below, the focus is on elements necessary to understand operations according to various embodiments, and detailed descriptions of elements that would unnecessarily obscure the main points are omitted. Also, in the drawings, some elements may be exaggerated, omitted, or schematically illustrated. The sizes of the elements do not entirely reflect actual sizes, and thus the details described herein are not limited by the relative sizes or intervals of the elements illustrated in each drawing.
Referring to the accompanying drawings, the apparatus 100 for monitoring driver's concentrativeness using driver's eye tracing according to an exemplary embodiment of the present disclosure includes a controller 110, a memory 111, a concentrativeness determiner 120, an ROI determiner 130, a concentrated state determiner 140, and an interface 150.
The apparatus 100 for monitoring driver's concentrativeness using driver's eye tracing according to an exemplary embodiment of the present disclosure may be mounted in a vehicle, and its components may be implemented by hardware such as a semiconductor processor, by software such as an application program, or by a combination thereof. Also, the controller 110, which performs overall control of the various components of the apparatus 100, may be implemented to include the functions of one or more of the other components, and a partial function of the controller 110 may be implemented as a separate unit. The memory 111 stores data or configuration information required for the operation of the apparatus 100 for monitoring driver's concentrativeness.
First, an operation of the apparatus 100 for monitoring driver's concentrativeness according to an exemplary embodiment of the present disclosure will be described.
The concentrativeness determiner 120 determines a sight region regarding a front side on which a driver keeps eyes in every image frame in real time while the vehicle is running, and determines a sight concentrativeness value of each pixel corresponding to each sight region having a predetermined size in a front image to generate a concentrativeness map corresponding to the entire front image. Here, the concentrativeness determiner 120 may determine the sight region on which the driver keeps eyes by analyzing an image of the driver captured by and transmitted from an image capture device, such as a camera, mounted in the vehicle. Also, the concentrativeness determiner 120 may search for each sight region (please refer to the accompanying drawings) in the front image.
The ROI determiner 130 determines an ROI (please refer to the accompanying drawings) requiring driver's concentrativeness having a level equal to or higher than a predetermined level, relative to a peripheral comparison region, in the front image.
The concentrated state determiner 140 compares the concentrativeness map corresponding to the front image with the ROI to determine a concentrated state (concentration/non-concentration) of the driver. Also, when the concentrated state determiner 140 determines that the concentrated state of the driver is changed to a non-concentrated state, the concentrated state determiner 140 may provide an alarm to the driver through the interface 150.
For example, the interface 150 may include a display device supporting a liquid crystal display (LCD), a head-up display (HUD), augmented reality (AR), and the like; may include a notification means that can physically come into contact with the driver, such as a sound, vibration, illumination, or a temperature; and may include a means supporting vehicle-to-everything (V2X) communication with the exterior. According to circumstances, the interface 150 may also include a control device for controlling the vehicle, such as a brake, a steering system, and the like.
When the concentrated state determiner 140 determines that the driver's concentrativeness on an ROI, or the like, is dropped, the concentrated state determiner 140 may provide an alarm service, or the like, to the driver on the basis of the driver's current fatigue state, a carelessness event occurrence history, a degree of drowsiness, a change in feeling, or road situation information obtained through V2X communication, through the interface 150. For example, through the interface 150, the concentrated state determiner 140 may provide a physical notification by a means that can physically come into contact with the driver, such as a sound, vibration, illumination, or a temperature; may provide a guidance or interactive message indicating the need for a break on a display device; or may provide a service such as braking, steering, and the like, in accordance with the concentrated state of the driver through a control device for controlling the vehicle. The control device for controlling the vehicle may use information from vehicle sensors, such as the distance to a preceding vehicle obtained using a radar sensor or vehicle speed information obtained using a wheel speed sensor and acceleration/deceleration pedal sensors, to control braking or steering.
Hereinafter, an operation of the apparatus 100 for monitoring driver's concentrativeness according to an exemplary embodiment of the present disclosure will be described in more detail with reference to the accompanying flow chart.
Referring to the flow chart, the concentrativeness determiner 120 first determines, in real time, the sight region on the front side on which the driver keeps eyes in every image frame while the vehicle is running, on the basis of a central vision and peripheral vision model of the driver.
When the concentrativeness determiner 120 determines the sight region on the basis of the central vision and peripheral vision model of the driver, the concentrativeness determiner 120 may generate a concentrativeness map corresponding to the entire front image by determining a sight concentrativeness value of each pixel corresponding to each sight region.
For example, the sight concentrativeness value of each pixel may be determined in stages, such that the highest value is assigned to pixels in the central vision portion positioned at the center of the driver's sight direction and gradually lower values are assigned toward the peripheral vision portion therearound.
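The staged central-to-peripheral assignment described above can be sketched as follows. This is a minimal illustration in Python; the stage radii and stage values are assumptions made for the example and are not fixed by the disclosure.

```python
import numpy as np

def sight_region_map(height, width, gaze_x, gaze_y,
                     stages=((20, 3.0), (40, 2.0), (60, 1.0))):
    """Build a per-frame sight concentrativeness map for one front image.

    Pixels nearest the gaze point (the central vision portion) receive
    the highest value; values step down toward the peripheral vision
    portion. `stages` lists (radius_in_pixels, value) pairs from the
    innermost stage outward; pixels beyond the last radius stay at 0.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.hypot(xs - gaze_x, ys - gaze_y)  # pixel distance to gaze point
    frame_map = np.zeros((height, width), dtype=np.float32)
    # Paint the outer (lower) stages first so inner stages overwrite them.
    for radius, value in reversed(stages):
        frame_map[dist <= radius] = value
    return frame_map
```

Painting the stages from the outside in keeps the map monotonically non-increasing from the gaze point outward without any per-pixel branching.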
The concentrativeness determiner 120 accumulates the concentrativeness values of each pixel of the concentrativeness map corresponding to the entire front image, while attenuating them on a frame basis, in operation S130. The concentrativeness determiner 120 may determine the concentrativeness value of each pixel at each stage of sight concentrativeness according to the accumulation and the attenuation rate of each stage in each frame.
The concentrativeness determiner 120 may configure the concentrativeness map by determining each sight region of each image frame, and may repeat a process of generating the concentrativeness map by accumulating the sight concentrativeness value of each pixel over a predetermined number of frames (e.g., three frames) and attenuating the sight concentrativeness value of each pixel every predetermined number of frames (e.g., three frames). Here, the concentrativeness determiner 120 may accumulate the sight concentrativeness values and subsequently attenuate them, or may attenuate and subsequently accumulate them.
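The accumulate-then-attenuate cycle described above can be sketched as follows. The weight values (0.7/0.3), the attenuation rates, and the three-frame period are illustrative assumptions; the disclosure only specifies that weighted previous and current values are added and that attenuation is uniform or depth-dependent.

```python
import numpy as np

def accumulate(prev_map, frame_map, w_prev=0.7, w_cur=0.3):
    """Weighted sum of the previous accumulated map and the current frame map."""
    return w_prev * prev_map + w_cur * frame_map

def attenuate(acc_map, rate=0.5):
    """Apply a single attenuation rate to all of the pixels."""
    return acc_map * rate

def attenuate_by_depth(acc_map, depth_map, near_rate=0.8, far_rate=0.4):
    """Variant: attenuate more strongly at greater pixel depths."""
    t = np.clip(depth_map / depth_map.max(), 0.0, 1.0)
    return acc_map * (near_rate + (far_rate - near_rate) * t)

def update_concentrativeness_map(acc_map, frame_maps, attenuate_every=3):
    """Accumulate per-frame sight maps, attenuating every N frames."""
    for i, frame_map in enumerate(frame_maps, start=1):
        acc_map = accumulate(acc_map, frame_map)
        if i % attenuate_every == 0:
            acc_map = attenuate(acc_map)
    return acc_map
```

Because the per-frame weights sum to one, a pixel the driver keeps looking at converges toward the frame value, while pixels the eyes leave decay through both the accumulation weight and the periodic attenuation.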
For example, as illustrated in the accompanying drawings, the sight concentrativeness values of the pixels may be accumulated over successive frames and then attenuated. In attenuating the values, the concentrativeness determiner 120 may apply the same attenuation rate to all of the pixels, or may apply different attenuation rates according to the depth of each pixel on the basis of depth information of each pixel.
The ROI determiner 130 determines an ROI (please refer to the accompanying drawings) requiring driver's concentrativeness having a level equal to or higher than a predetermined level, relative to a peripheral comparison region, in the front image.
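The disclosure does not specify how the required concentrativeness level of a candidate region is computed. One plausible sketch, given here only as an assumption, compares a candidate region's mean response in some per-pixel importance signal (e.g., motion or edge energy) with the mean response of the surrounding comparison ring; the function name, the margin, and the level ratio are all illustrative.

```python
import numpy as np

def requires_attention(feature_map, box, margin=10, level=1.5):
    """Decide whether a candidate region qualifies as an ROI.

    `feature_map` is any per-pixel importance signal for the front image.
    The region qualifies when its mean response is at least `level` times
    the mean response of a surrounding comparison ring of width `margin`.
    """
    x0, y0, x1, y1 = box
    inner = feature_map[y0:y1, x0:x1]
    ox0, oy0 = max(x0 - margin, 0), max(y0 - margin, 0)
    ox1 = min(x1 + margin, feature_map.shape[1])
    oy1 = min(y1 + margin, feature_map.shape[0])
    outer = feature_map[oy0:oy1, ox0:ox1].copy()
    outer[y0 - oy0:y1 - oy0, x0 - ox0:x1 - ox0] = np.nan  # mask the region itself
    ring_mean = np.nanmean(outer)
    return inner.mean() >= level * max(ring_mean, 1e-6)
```

Masking the region inside the expanded window before averaging makes the denominator a true peripheral comparison, so a uniformly bright image does not trigger spurious ROIs.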
The concentrated state determiner 140 determines a concentrated state (concentration/non-concentration) of the driver by comparing the ROI with the concentrativeness map corresponding to the front image in operation S150. Also, when the concentrated state determiner 140 determines that the concentrated state of the driver is changed to a non-concentrated state, the concentrated state determiner 140 may provide an alarm to the driver through the interface 150 according to the various methods mentioned above.
For example, when an average of sight concentrativeness values of pixels of the ROI (e.g., 720) is equal to or greater than a threshold value, the concentrated state determiner 140 may determine that the driver is in a concentrated state.
Also, when sight concentrativeness values of the pixels of the ROI (e.g., 720) are increased over time, relative to concentrativeness values of the peripheral regions, the concentrated state determiner 140 may determine that the driver is in a concentrated state.
Also, as illustrated in the accompanying drawings, when a distance between the center of pixels having a concentrativeness value equal to or greater than a threshold value and the center of the ROI is smaller than a predetermined value, the concentrated state determiner 140 may determine that the driver is in a concentrated state.
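Two of the criteria above, the ROI-average threshold and the center-distance check, can be combined in a short sketch; the threshold values are illustrative assumptions, and the time-trend criterion is omitted for brevity.

```python
import numpy as np

def is_concentrated(conc_map, box, avg_threshold=1.0,
                    pixel_threshold=2.0, max_center_dist=15.0):
    """Combine two disclosed criteria (threshold values are illustrative).

    1) The mean concentrativeness inside the ROI is >= avg_threshold, or
    2) the centroid of pixels whose value is >= pixel_threshold lies
       within max_center_dist pixels of the ROI center.
    """
    x0, y0, x1, y1 = box
    roi = conc_map[y0:y1, x0:x1]
    if roi.mean() >= avg_threshold:
        return True
    ys, xs = np.nonzero(conc_map >= pixel_threshold)
    if len(xs) == 0:
        return False
    cx, cy = xs.mean(), ys.mean()              # centroid of high-value pixels
    rx, ry = (x0 + x1) / 2.0, (y0 + y1) / 2.0  # ROI center
    return np.hypot(cx - rx, cy - ry) < max_center_dist
```

The centroid criterion tolerates a gaze that hovers near, rather than exactly on, the ROI, which is useful when the eye tracker is noisy.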
The computing system 1000 may include at least one processor 1100, a memory 1300, a user interface input device 1400, a user interface output device 1500, a storage 1600, and a network interface 1700, which are connected through a bus 1200. The processor 1100 may be a central processing unit (CPU) or a semiconductor device that processes instructions stored in the memory 1300 and/or the storage 1600. The memory 1300 and the storage 1600 may include various types of volatile or nonvolatile storage media. For example, the memory 1300 may include a read only memory (ROM) 1310 and a random access memory (RAM) 1320.
Thus, the steps of the method or algorithm described above in relation to the exemplary embodiments of the present disclosure may be directly implemented by hardware, by a software module executed by the processor 1100, or by a combination thereof. The software module may reside in a storage medium (i.e., the memory 1300 and/or the storage 1600) such as a RAM, a flash memory, a ROM, an EPROM, an EEPROM, a register, a hard disk, a removable disk, or a CD-ROM. An exemplary storage medium is coupled to the processor 1100, and the processor 1100 may read information from, and write information to, the storage medium. Alternatively, the storage medium may be integrated with the processor 1100. The processor 1100 and the storage medium may reside in an application specific integrated circuit (ASIC). The ASIC may reside in a user terminal. Alternatively, the processor 1100 and the storage medium may reside as separate components in a user terminal.
As described above, in the apparatus and method for monitoring driver's concentrativeness according to exemplary embodiments of the present disclosure, whether the driver is attentively obtaining major information is monitored by tracing the driver's eyes while the vehicle is running, so that an alarm may be effectively provided to the driver when the driver's concentrativeness drops at the time the major information changes while the vehicle is running.
Hereinabove, although the present disclosure has been described with reference to exemplary embodiments and the accompanying drawings, the present disclosure is not limited thereto, but may be variously modified and altered by those skilled in the art to which the present disclosure pertains without departing from the spirit and scope of the present disclosure claimed in the following claims.
Number | Date | Country | Kind |
---|---|---|---|
10-2016-0077598 | Jun 2016 | KR | national |
Number | Name | Date | Kind |
---|---|---|---|
20050209749 | Ito et al. | Sep 2005 | A1 |
20100156617 | Nakada et al. | Jun 2010 | A1 |
20100305755 | Heracles | Dec 2010 | A1 |
20150339589 | Fisher | Nov 2015 | A1 |
20170001648 | An | Jan 2017 | A1 |
Number | Date | Country |
---|---|---|
2005-267108 | Sep 2005 | JP |
2007-329793 | Dec 2007 | JP |
4625544 | Feb 2011 | JP |
2014-191474 | Oct 2014 | JP |
10-2013-0054830 | May 2013 | KR |
10-2013-0076218 | Jul 2013 | KR |
10-2014-0100629 | Aug 2014 | KR |
Entry |
---|
Korean Office Action issued in Application No. 10-2016-0077598 dated Nov. 21, 2017. |
Number | Date | Country | |
---|---|---|---|
20170364761 A1 | Dec 2017 | US |