The present invention relates to a detection system.
In recent years, in order to prevent drivers from being distracted or falling asleep while driving, there has been disclosed, for example, a face image capturing device that captures an image of a face of a driver using a near-infrared camera and a near-infrared light source and detects an orientation of the face, a degree of opening of an eye, a direction of a line of sight, or the like (see, for example, Japanese Patent Application Laid-open No. 2009-116797).
Incidentally, even if near-infrared light is emitted toward the entire face of a driver, the reflected light from the face may not be uniform because a shadow is generated on a part of the face of the driver due to, for example, a sunshade used during daytime when external light (sunlight) enters the vehicle interior, the afternoon sun, or external illumination during nighttime. Feature points of the face (e.g., a contour of the face, shapes and positions of eyes, a nose, and a mouth, and whether there are eyeglasses) can still be detected from a face image obtained in such a situation; however, a local object having only a small difference in luminance value within the face area, such as an iris of an eye, is more strongly affected by such non-uniformity.
Therefore, there is concern that the accuracy of detection of the iris of the eye or the like may decrease, and there is room for further improvement in this respect.
An object of the present invention is to provide a detection system capable of suppressing a decrease in accuracy of detection of feature points of a face.
In order to achieve the above-mentioned object, a detection system according to one aspect of the present invention includes a light source section including a plurality of light emitting devices, each of the plurality of light emitting devices being configured to emit near-infrared light toward a different portion of a face of an occupant of a vehicle; an imaging section configured to capture a face image with reflected light of the near-infrared light emitted to the face of the occupant; and a control unit configured to detect feature points and a face image area of the face of the occupant from the face image captured by the imaging section, wherein, based on a plurality of divided image areas obtained by dividing the face image area according to the plurality of light emitting devices, respectively, the control unit individually dims light from the light emitting devices corresponding to the respective divided image areas, wherein the control unit includes: an extraction section configured to extract a measurement image area from each of the divided image areas based on the feature points included in each of the divided image areas; a calculation section configured to calculate an average value of pixel values based on all pixel values of a plurality of pixels included in each of the measurement image areas; a first storage section configured to store in advance, as a reference value corresponding to each of the divided image areas, an average value of pixel values based on all pixel values of a plurality of pixels included in each of the measurement image areas, which is obtained based on a face image captured under a reference environment; a second storage section configured to store, as a measurement value corresponding to each of the divided image areas, an average value of pixel values based on all pixel values of a plurality of pixels included in each of the measurement image areas, which is obtained based on a face image captured under a measurement environment; and a dimming section configured to compare a difference of the reference value with respect to the measurement value with a threshold value, and to reduce a light amount of the light emitting device corresponding to the divided image area when the difference is larger than or equal to the threshold value, or to increase the light amount of the light emitting device corresponding to the divided image area when the difference is smaller than the threshold value, and wherein the measurement image area is formed based on a boundary line X that divides the face image area in a Y direction, a boundary line Y that divides the face image area in an X direction orthogonal to the Y direction, and end points in the X direction and the Y direction among the feature points included in each of the divided image areas.
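Purely as an illustrative reading aid (not part of the disclosed embodiment), the following minimal Python sketch paraphrases the claimed per-area dimming rule. The sign convention of the "difference of the reference value with respect to the measurement value" is ambiguous in the text, so the convention below (measurement minus reference) and all names are assumptions.

```python
def dimming_action(measurement: float, reference: float, threshold: float) -> str:
    """Claimed dimming rule for one divided image area (hedged sketch)."""
    # Assumed sign convention: a positive difference means the measured divided
    # image area is brighter than under the reference environment.
    difference = measurement - reference
    if difference >= threshold:
        return "reduce light amount"   # area too bright
    return "increase light amount"     # area not bright enough
```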
The above and other objects, features, advantages and technical and industrial significance of this invention will be better understood by reading the following detailed description of presently preferred embodiments of the invention, when considered in connection with the accompanying drawings.
Hereinafter, an embodiment of a detection system according to the present invention will be described in detail with reference to the drawings. Note that the present invention is not limited by the following embodiment. That is, constituent elements in the following embodiment include those that can be easily assumed by those skilled in the art or those that are substantially the same, and various omissions, substitutions, and changes can be made without departing from the gist of the invention.
A detection system according to the present embodiment will be described with reference to the drawings.
Note that the up-down direction used in the following description is an up-down direction of an imaging unit 2 constituting the detection system 1 as illustrated in the drawings.
For example, as illustrated in the drawings, the detection system 1 includes an imaging unit 2 installed on a steering column of a vehicle 100 and a control unit 3.
The imaging unit 2 emits near-infrared light to a face F of the driver D and captures face images 40 and 50.
The face image 40 is a face image of the driver D in which the face F of the driver D is imaged by the imaging unit 2 under a measurement environment. The measurement environment is, for example, an environment in which the driver D drives the vehicle 100 during daytime, nighttime, or the like. Under the measurement environment, there is a possibility that a shadow is generated on a part of the face F of the driver D, for example, due to external light (environmental light) entering a vehicle interior 101, a sunshade, or the like.
The face image 50 is captured under a reference environment, and is, for example, an image registered for driver authentication or a face image of the driver D captured as calibration at the time of first use. The face image 50 is an image in which the luminance value of the face is uniform. The reference environment is an environment in which the luminance value of the face F in the face image 50 is uniform to some extent, in contrast to the measurement environment.
Specifically, the imaging unit 2 includes a board section 10, a light source section 11, and an imaging section 12.
The board section 10 constitutes an electronic circuit on which various electronic components are mounted and which electrically connects the electronic components, and is a so-called printed circuit board. In the board section 10, for example, a wiring pattern (printed pattern) is formed (printed) by a conductive member such as copper foil on an insulating layer made of an insulating material such as epoxy resin, glass epoxy resin, paper epoxy resin, or ceramic. The board section 10 is, for example, a multilayer board obtained by stacking a plurality of insulating layers on which wiring patterns are formed. The board section 10 is formed in a rectangular shape, and the light source section 11 and the imaging section 12 are mounted on and electrically connected to the board section 10.
The light source section 11 emits near-infrared light, for example, under the control of the control unit 3. As illustrated in the drawings, the light source section 11 includes four LEDs, a first LED 11A to a fourth LED 11D.
As illustrated in the drawings, the first LED 11A to the fourth LED 11D emit near-infrared light toward a first irradiation range 31A to a fourth irradiation range 31D of the face F of the driver D, respectively, so that each LED irradiates a different portion of the face F.
The imaging section 12 captures the face image 40 with reflected light of the near-infrared light emitted to the face F of the driver D. As illustrated in the drawings, the imaging section 12 is, for example, a near-infrared camera mounted on the board section 10.
The control unit 3 controls the imaging unit 2. The control unit 3 includes a control board 21 and a CPU 22.
The control board 21 constitutes an electronic circuit on which various electronic components are mounted and which electrically connects the electronic components, and is a so-called printed circuit board. In the control board 21, for example, a wiring pattern is formed by a conductive member such as copper foil on an insulating layer made of an insulating material such as epoxy resin, glass epoxy resin, paper epoxy resin, or ceramic. The control board 21 is, for example, a multilayer board obtained by stacking a plurality of insulating layers on which wiring patterns are formed. The CPU 22 is mounted on and electrically connected to the control board 21. In addition, the control board 21 is connected to the imaging unit 2 via a communication line T.
The CPU 22 controls the imaging unit 2. The CPU 22 includes, for example, an extraction section 23, a calculation section 24, a storage section 25, and a dimming section 26 illustrated in the drawings.
As illustrated in the drawings, the extraction section 23 extracts a face image area 41 (or a face image area 51) based on feature points 60 of the face F of the driver D from the face image 40 (or the face image 50).
The extraction section 23 divides the extracted face image area 41 (or face image area 51) into four equal areas. Specifically, as illustrated in the drawings, the extraction section 23 divides the face image area 41 into four divided image areas, a first divided image area 42A to a fourth divided image area 42D (and divides the face image area 51 into a first divided image area 52A to a fourth divided image area 52D).
The first divided image area 42A and the second divided image area 42B, and the third divided image area 42C and the fourth divided image area 42D are adjacent to each other in the X direction of the face image 40 (or face image 50) with a boundary line (equal division line) Y therebetween. In addition, the first divided image area 42A and the third divided image area 42C, and the second divided image area 42B and the fourth divided image area 42D are adjacent to each other in the Y direction of the face image 40 (or face image 50) with a boundary line (equal division line) X therebetween. Boundaries between the first divided image area 42A to the fourth divided image area 42D including the boundary lines X and Y are determined depending on the installed position of the imaging section 12, an irradiation range 31 of each LED, and the like.
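As an illustration of this equal four-way division, here is a short Python sketch assuming the face image area is available as an 8-bit grayscale NumPy array; the mapping of quadrants to the first to fourth divided image areas is an assumption inferred from the adjacency relations described above, not stated in the patent.

```python
import numpy as np

def divide_face_area(face_area: np.ndarray) -> list[np.ndarray]:
    """Divide a rectangular face image area into four equal divided image areas."""
    h, w = face_area.shape[:2]
    y_mid, x_mid = h // 2, w // 2  # boundary line X (horizontal) and boundary line Y (vertical)
    return [
        face_area[:y_mid, :x_mid],  # assumed first divided image area 42A (upper left)
        face_area[:y_mid, x_mid:],  # assumed second divided image area 42B (upper right)
        face_area[y_mid:, :x_mid],  # assumed third divided image area 42C (lower left)
        face_area[y_mid:, x_mid:],  # assumed fourth divided image area 42D (lower right)
    ]
```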
As illustrated in the drawings, the extraction section 23 extracts a measurement image area 45 (or a measurement image area 55) from each of the divided image areas based on the feature points 60 included in each of the divided image areas. The measurement image area 45 (or 55) is a rectangular area formed based on the boundary line X, the boundary line Y, and the end points in the X direction and the Y direction among the feature points 60 included in each of the divided image areas.
The calculation section 24 calculates an average value of pixel values based on all pixel values of a plurality of pixels included in each of at least two measurement image areas 45 (55) extracted by the extraction section 23. Specifically, the calculation section 24 stores the average value of pixel values based on all the pixels in each measurement image area 45 (55) in the storage section 25 as a measurement value (or reference value) corresponding to each of the first divided image area 42A to the fourth divided image area 42D (or each of the first divided image area 52A to the fourth divided image area 52D).
The storage section 25 stores the measurement value and the reference value calculated by the calculation section 24. The storage section 25 stores in advance the average value of pixel values based on all pixel values of a plurality of pixels included in each measurement image area 55, which is obtained based on the face image 50, as a reference value corresponding to each of the first divided image area 52A to the fourth divided image area 52D. In addition, the storage section 25 stores the average value of pixel values based on all pixel values of a plurality of pixels included in each measurement image area 45, which is obtained based on the face image 40, as a measurement value corresponding to each of the first divided image area 42A to the fourth divided image area 42D.
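The following sketch combines the extraction of a measurement image area with the average-value calculation, under the assumption that the measurement image area can be approximated by the bounding rectangle of the feature points inside a divided image area; the text forms the area from the boundary lines and the feature-point end points, and this is one plausible reading, not the definitive construction.

```python
import numpy as np

def measurement_value(divided_area: np.ndarray,
                      feature_points: list[tuple[int, int]]) -> float:
    """Average of all pixel values in a measurement image area (hedged sketch).

    feature_points are (x, y) pixel coordinates local to divided_area.
    """
    xs = [x for x, _ in feature_points]
    ys = [y for _, y in feature_points]
    # End points in the X and Y directions among the feature points span the area.
    area = divided_area[min(ys):max(ys) + 1, min(xs):max(xs) + 1]
    return float(area.mean())

# Reference values would be produced the same way from the face image 50 captured
# under the reference environment and stored in advance, e.g. (illustrative names):
# references = [measurement_value(q, pts) for q, pts in zip(quadrants_50, points_50)]
```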
The dimming section 26 causes the four LEDs, the first LED 11A to the fourth LED 11D, to emit light and performs dimming. The dimming section 26 first causes all of the first LED 11A to the fourth LED 11D to emit light at a preset initial light amount, and then dims light from each of the first LED 11A to the fourth LED 11D individually; for example, the dimming section 26 can increase or decrease the light amount of only the first LED 11A. The dimming section 26 performs dimming by increasing or decreasing the light amount of each of the first LED 11A to the fourth LED 11D based on the reference values corresponding to the first divided image area 52A to the fourth divided image area 52D and the measurement values corresponding to the first divided image area 42A to the fourth divided image area 42D stored in the storage section 25. Specifically, the dimming section 26 compares the difference of the reference value with respect to the measurement value read from the storage section 25 with a threshold value. When the dimming section 26 determines that the difference is larger than or equal to a first threshold value, the dimming section 26 determines that the corresponding divided image area 42 is too bright, and reduces the light amount of the LED (among the first LED 11A to the fourth LED 11D) corresponding to that divided image area (among the first divided image area 42A to the fourth divided image area 42D). When the difference is smaller than or equal to a second threshold value, the dimming section 26 determines that the corresponding divided image area 42 is too dark, and increases the light amount of the corresponding LED. For example, the dimming section 26 can decrease or increase the light amount of only the first LED 11A corresponding to the first divided image area 42A among the four divided image areas, the first divided image area 42A to the fourth divided image area 42D, according to the result of the comparison between the difference and the threshold value.
The relationship between the first threshold value and the second threshold value is first threshold value > second threshold value. When the difference between the reference value and the measurement value is zero (0), dimming is unnecessary. That is, when the difference falls between the second threshold value and the first threshold value, the dimming section 26 does not need to decrease or increase light.
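A minimal sketch of this two-threshold decision follows, using the same assumed sign convention (measurement minus reference) as the earlier sketch; the threshold values are illustrative, not from the patent.

```python
def adjust_led(difference: float,
               first_threshold: float = 20.0,    # illustrative value
               second_threshold: float = -20.0   # illustrative value
               ) -> str:
    """Two-threshold dimming decision; requires first_threshold > second_threshold."""
    if difference >= first_threshold:
        return "reduce"    # divided image area too bright
    if difference <= second_threshold:
        return "increase"  # divided image area too dark
    return "keep"          # between the thresholds: dimming unnecessary
```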
Next, the control of the dimming of light from the light source section 11 in the detection system 1 will be described with reference to the flowchart in the drawings.
In step S1, the dimming section 26 causes all of the four LEDs, the first LED 11A to the fourth LED 11D, to emit light at a preset initial light amount, and emits near-infrared light to the face F of the driver D from each of the first LED 11A to the fourth LED 11D.
Next, in step S2, the imaging section 12 receives reflected light of the near-infrared light emitted to the face F of the driver D from all of the four LEDs, the first LED 11A to the fourth LED 11D, captures a face image 40 of the driver D, and outputs the captured face image 40 of the driver D to the control unit 3. The control unit 3 acquires the face image 40 of the driver D from the imaging section 12, and inputs the face image 40 to the face recognition middleware.
Next, in step S3, the extraction section 23 extracts a face image area 41 based on feature points 60 of the face F of the driver D from the face image 40 acquired in step S2.
Next, in step S4, the extraction section 23 divides the face image area 41 into four divided image areas, a first divided image area 42A to a fourth divided image area 42D.
Next, in step S5, the extraction section 23 extracts a measurement image area 45 based on the feature points 60 in each of the first divided image area 42A to the fourth divided image area 42D.
Next, in step S6, the calculation section 24 calculates an average value of pixel values based on all pixel values of a plurality of pixels included in each measurement image area 45, and stores the calculated average value in the storage section 25.
Next, in step S7, the dimming section 26 compares a difference of a reference value with respect to the measurement value read from the storage section 25 with a threshold value.
Next, in step S8, the dimming section 26 determines whether the difference is larger than or equal to a first threshold value. When it is determined that the difference is larger than or equal to the first threshold value, the dimming section 26 proceeds to step S9. On the other hand, when it is determined that the difference is not larger than or equal to the first threshold value, the dimming section 26 proceeds to step S10.
In step S9, the dimming section 26 reduces a light amount of an LED (at least one of the first LED 11A to the fourth LED 11D) corresponding to a divided image area in which the difference is larger than or equal to the first threshold value among the first divided image area 42A to the fourth divided image area 42D, and ends the present process.
In step S10, the dimming section 26 determines whether the difference is smaller than or equal to a second threshold value. When it is determined that the difference is smaller than or equal to the second threshold value, the dimming section 26 proceeds to step S11. On the other hand, when it is determined that the difference is not smaller than or equal to the second threshold value, the dimming section 26 ends the present process.
In step S11, the dimming section 26 increases a light amount of an LED (at least one of the first LED 11A to the fourth LED 11D) corresponding to a divided image area in which the difference is smaller than or equal to the second threshold value among the first divided image area 42A to the fourth divided image area 42D, and ends the present process.
The control unit 3 repeats the process from step S5 to step S11 until the dimming of light from the four LEDs, the first LED 11A to the fourth LED 11D, in the light source section 11 is completed. When dimming becomes unnecessary, the control unit 3 detects the iris or the like of the eye based on the face image 40 obtained at that timing. By doing so, the difference between luminance values in the face image area 41 decreases, thereby suppressing a decrease in accuracy of the detection of the feature points 60 of the face F.
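Tying steps S1 to S11 together, a hedged end-to-end sketch follows. It reuses divide_face_area, measurement_value, and adjust_led from the sketches above; the camera, leds, and detect_features interfaces are stand-ins invented for illustration and are not APIs from the patent, and detect_features is assumed to return feature points grouped per divided image area.

```python
def dimming_loop(camera, leds, detect_features, references,
                 first_threshold=20.0, second_threshold=-20.0,
                 initial_amount=0.5):
    """Repeat steps S5-S11 until no LED needs adjustment, then return the image."""
    for led in leds:
        led.set_amount(initial_amount)                    # S1: preset initial light amount
    while True:
        face_image = camera.capture()                     # S2: capture the face image
        face_area, points = detect_features(face_image)   # S3: face area + feature points
        quadrants = divide_face_area(face_area)           # S4: four divided image areas
        adjusted = False
        for led, quad, pts, ref in zip(leds, quadrants, points, references):
            value = measurement_value(quad, pts)          # S5-S6: average pixel value
            action = adjust_led(value - ref,              # S7: compare with thresholds
                                first_threshold, second_threshold)
            if action == "reduce":                        # S8-S9: area too bright
                led.decrease()
                adjusted = True
            elif action == "increase":                    # S10-S11: area too dark
                led.increase()
                adjusted = True
        if not adjusted:                                  # dimming no longer necessary
            return face_image                             # use this image for iris detection
```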
As described above, the detection system 1 according to the present embodiment includes an imaging unit 2 including a light source section 11 and an imaging section 12, and a control unit 3. In the light source section 11, each of the four LEDs, the first LED 11A to the fourth LED 11D, emits near-infrared light toward a different portion of the face F of the driver D of the vehicle 100. The imaging section 12 captures the face images 40 and 50 with reflected light of the near-infrared light emitted to the face F of the driver D from the light source section 11. The control unit 3 extracts the feature points 60 of the face F of the driver D and the face image area 41 from the captured face images 40 and 50. Based on the four divided image areas, the first divided image area 42A to the fourth divided image area 42D, obtained by dividing the face image area 41 according to the four LEDs, the first LED 11A to the fourth LED 11D, respectively, the control unit 3 individually dims light from the LED corresponding to each divided image area.
With the above-described configuration, the detection system 1 can brighten an LED corresponding to a divided image area 42 in which a part of the face F of the driver D is shaded, for example, by a sunshade, or darken another LED corresponding to a divided image area 42 in which no shadow is generated. As a result, the detection system 1 can bring the face image 40 closer to the face image 50 and reduce a change in brightness (luminance) across the entire face image area 41 obtained from the face image 40, thereby suppressing a decrease in accuracy of the detection of the feature points 60 of the face F. For example, in a case where the vicinity of the iris of the eye E in the face image area 41 is dark, the detection system 1 can suppress a decrease in accuracy of detection by increasing the light amount of the corresponding LED. In addition, the detection system 1 can dim light from the light source section 11 without changing the conventional device configuration. In addition, since the detection system 1 includes the light source section 11 that emits near-infrared light and the imaging section 12 that is a near-infrared camera, it is possible to acquire the face image 40 without requiring a large amount of light even at the time of capturing an image at nighttime or in a case where the driver D wears sunglasses. In addition, the detection system 1 can reduce power consumption by dimming light from the light source section 11, for example, during daytime, and suppress heat generation of the light source section 11, thereby extending the product life.
Some conventional driver monitoring systems adjust the amount of light by using a near-infrared light sensor that receives reflected light of near-infrared light emitted to an object (see, for example, Japanese Patent Application Laid-open No. S59-86973). In such conventional technology, since a near-infrared light sensor is used to suppress a decrease in accuracy of detection, component costs increase and a control circuit for the sensor needs to be added. In contrast, the detection system 1 can suppress a decrease in accuracy of detection without using a near-infrared light sensor, thereby suppressing an increase in product cost.
In addition, in the detection system 1 according to the present embodiment, the dimming section 26 compares a difference of a reference value with respect to a measurement value with a threshold value whenever a face image 40 is captured under a measurement environment, and decreases a light amount of an LED corresponding to a divided image area 42 when the difference is larger than or equal to the threshold value, or increases a light amount of a light emitting device corresponding to a divided image area 42 when the difference is smaller than the threshold value. As a result, for example, in a case where the vicinity of the iris of the eye E in the face image area 41 is dark, a light amount of a corresponding LED can be increased, and in a case where the vicinity of the iris of the eye E in the face image area 41 is bright, a light amount of a corresponding LED can be decreased, so that it is possible to suppress a decrease in accuracy of detection of the iris of the eye or the like.
In the detection system 1 according to the present embodiment, the first divided image area 42A to the fourth divided image area 42D are rectangular areas divided equally according to the first irradiation range 31A to the fourth irradiation range 31D irradiated with near-infrared light by the first LED 11A to the fourth LED 11D, respectively. As a result, when the face image area 41 (or 51) has a rectangular shape, the face image area 41 (or 51) can be divided according to the positional relationship between each LED and each divided image area, making the correspondence between the plurality of divided image areas 42 (or 52) and the irradiation ranges 31 of the plurality of LEDs clear.
Note that, although the calculation section 24 calculates the average value of pixel values (the measurement value or the reference value) based on the measurement image area 45 (or 55) extracted by the extraction section 23 in the above-described embodiment, the calculation section 24 is not limited thereto. For example, the calculation section 24 may calculate the average value of pixel values based on each of the first divided image area 42A to the fourth divided image area 42D (or each of the first divided image area 52A to the fourth divided image area 52D) as a whole. In this case as well, the above-described effect can be obtained.
In addition, although the extraction section 23 extracts the face image area 41 based on the feature points 60 of the face F in the above-described embodiment, the extraction section 23 is not limited thereto, and may extract the feature points 60 of the face F based on the face image area 41.
In addition, although the measurement image area 45 (or 55) is a rectangular area in the above-described embodiment, the measurement image area 45 (or 55) is not limited thereto. For example, as illustrated in the drawings, the measurement image area may be an area of another shape formed based on the feature points 60 included in each of the divided image areas.
In addition, although the light source section 11 includes the four LEDs, the first LED 11A to the fourth LED 11D, in the above-described embodiment, the light source section 11 is not limited thereto. The positions at which the plurality of LEDs are arranged are determined depending on the installed position of the imaging section 12, the irradiation range 31 of each LED, and the like.
In addition, although the imaging unit 2 is installed on the steering column in the above-described embodiment, the imaging unit 2 is not limited thereto, and may be installed on an instrument panel, a dashboard, a room mirror, or the like.
In addition, although it has been described as an example in the above-described embodiment that the CPU 22 includes an extraction section 23, a calculation section 24, a storage section 25, and a dimming section 26, and these functions are mounted on one IC, the CPU 22 is not limited thereto, and the aforementioned functions may be mounted on a plurality of ICs in a distributed manner.
In addition, although it has been described in the above-described embodiment that the processing functions of the control unit 3 are realized by a single processor, the control unit 3 is not limited thereto. The processing functions of the control unit 3 may be realized by combining a plurality of independent processors and causing each of the processors to execute a program. In addition, the processing functions of the control unit 3 may be realized by a single processing circuit or by a plurality of processing circuits in an appropriately distributed or integrated manner. In addition, all or some of the processing functions of the control unit 3 may be realized by a program, or may be realized as hardware by wired logic or the like.
In addition, although the detection system 1 is applied to the vehicle 100 such as an automobile in the above-described embodiment, the detection system 1 is not limited thereto, and may be applied to, for example, a ship, an aircraft, or the like as well as a vehicle. In addition, although the detection system 1 is divided into the imaging unit 2 and the control unit 3, the imaging unit 2 and the control unit 3 may be integrally configured.
The detection system according to the present embodiment is advantageous in that based on a plurality of divided image areas obtained by dividing the face image area according to the plurality of light emitting devices, respectively, light from the light emitting devices corresponding to the respective divided image areas can be individually dimmed, thereby suppressing a decrease in accuracy of the detection of the feature points of the face.
Although the invention has been described with respect to specific embodiments for a complete and clear disclosure, the appended claims are not to be thus limited but are to be construed as embodying all modifications and alternative constructions that may occur to one skilled in the art that fairly fall within the basic teaching herein set forth.
Number | Date | Country | Kind
---|---|---|---
2022-069254 | Apr. 20, 2022 | JP | national
This application is a continuation application of International Application No. PCT/JP2023/011442, filed on Mar. 23, 2023 and designating the U.S., which claims the benefit of priority from Japanese Patent Application No. 2022-069254, filed on Apr. 20, 2022, the entire contents of which are incorporated herein by reference.
Relation | Number | Date | Country
---|---|---|---
Parent | PCT/JP2023/011442 | Mar. 23, 2023 | WO
Child | 18802051 | | US