BIOLOGICAL INFORMATION MEASUREMENT DEVICE AND INPUT DEVICE UTILIZING SAME

Abstract
A biological information measurement device includes near-infrared cameras capable of detecting pupils of an object person, a detection unit capable of detecting pupil areas from pieces of image data obtained by the near-infrared cameras, a luminance acquisition unit that acquires luminances of skin areas serving as at least portions of the peripheries of the pupil areas, and a biological measurement unit that measures biological information of the object person from the luminances.
Description
BACKGROUND

1. Field of the Disclosure


The present disclosure relates to a biological information measurement device capable of measuring biological information.


2. Description of the Related Art


Japanese Unexamined Patent Application Publication No. 2011-130996 discloses a biological activity measurement device capable of detecting the average luminance of a specific area such as an area between eyebrows or a forehead and obtaining biological information such as the pulse rate of a test subject, based on the average luminance.


In the arrangement described in Japanese Unexamined Patent Application Publication No. 2011-130996, in order to identify the area between eyebrows, the two-dimensional or three-dimensional coordinates of the whole face image are acquired and the area between eyebrows is identified based on three feature points on those coordinates.


However, in the method of Japanese Unexamined Patent Application Publication No. 2011-130996, since the area between eyebrows is calculated from the whole face image, the load on a control unit is thought to become large. In addition, while the area between eyebrows is detected from three feature points, some object persons have feature points that are unlikely to appear clearly. In such a case, it becomes difficult to detect the area between eyebrows, eventually leading to measurement variation or measurement error of the biological information.


Therefore, the present invention solves the above-mentioned problem of the related art and provides, in particular, a biological information measurement device capable of measuring biological information more stably and with a higher degree of accuracy than the related art, and an input device utilizing the biological information measurement device.


SUMMARY

An input device includes an image capturing unit that detects a pupil of an object person, a detection unit that detects a pupil area from image data obtained by the image capturing unit, a luminance acquisition unit that acquires a luminance of a skin area serving as at least a portion of the periphery of the pupil area, and a biological measurement unit that measures biological information of the object person from the luminance of the skin area.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a pattern diagram illustrating a state in which a driver (object person) is image-captured by a near-infrared camera (image capturing unit);



FIG. 2A is a front view of the near-infrared camera (image capturing unit) in the present embodiment and FIG. 2B is a side view illustrating an internal structure of the near-infrared camera;



FIG. 3A is a bright pupil image, FIG. 3B is a dark pupil image, and FIG. 3C is a pattern diagram illustrating a difference image between the bright pupil image and the dark pupil image;



FIG. 4A is a pattern diagram in which a surrounding area of a pupil area is identified as a skin area for measuring a luminance and FIG. 4B is a pattern diagram in which only an area located below the pupil area is identified as a skin area for measuring a luminance, within the skin area illustrated in FIG. 4A;



FIG. 5 is a block diagram of a biological information measurement device in the present embodiment and an input device utilizing the biological information measurement device;



FIG. 6A is a flowchart diagram from acquisition of the bright pupil image and the dark pupil image to activation of a predetermined input operation, FIG. 6B is a flowchart diagram from pupil tracking to calculation of a visual line vector, and FIG. 6C is a flowchart diagram from nostril detection to face direction detection;



FIG. 7 is a relationship diagram illustrating a relationship between a pupil image, a corneal reflection image, and a visual line calculation;



FIG. 8 is a pattern diagram illustrating a corneal reflection image;



FIGS. 9A to 9H are explanatory diagrams for explaining an algorithm for visual line calculation and FIG. 9I is a graph illustrating a relationship between a distance r between a pupil center and corneal reflection and an angle θ between a camera pupil vector and a visual line vector;



FIG. 10 is a pattern diagram for explaining a method for measuring a face direction;



FIG. 11 is a pattern diagram illustrating a relationship between two near-infrared cameras and a viewpoint;



FIG. 12 is a flowchart diagram of an input device utilizing a biological information measurement device in another embodiment (a second invention); and



FIG. 13 is a pattern diagram in which a surrounding area of a corneal reflection image is identified as a skin area for measuring a luminance.





DESCRIPTION OF THE EXEMPLARY EMBODIMENTS


FIG. 1 is a pattern diagram illustrating a state in which a driver (object person) is image-captured by a near-infrared camera (image capturing unit). In the present embodiment, as illustrated in FIG. 1, the image of a driver 1 who sits in a driver seat in a vehicle is captured using a near-infrared camera 2 and the pupils of the driver 1 are detected using the image data thereof.


The near-infrared camera 2 is arranged in front of the driver seat, for example, on an instrument panel. Alternatively, the near-infrared camera 2 may be installed in a portion of a steering supporting section 3.


As illustrated in FIG. 2A, it is preferable that two near-infrared cameras 2a and 2b are provided. As illustrated in FIGS. 2A and 2B, it is preferable that the near-infrared cameras 2a and 2b each include a lens 4, a plurality of first light-emitting elements 5, a plurality of second light-emitting elements 6, an imaging element (sensor substrate) 7 located posterior to the lens 4, and a chassis 8 that supports the lens 4, the individual light-emitting elements 5 and 6, and the imaging element 7.


Here, it is preferable that the first light-emitting elements 5 are 870-nm LEDs and the second light-emitting elements 6 are 940-nm LEDs. Note that the wavelengths are just examples and wavelengths other than those may be adopted.


The individual light-emitting elements 5 and 6 are mounted on a board 9, and this LED board, in which the light-emitting elements 5 and 6 are arranged on the board 9, and the imaging element 7 are arranged in parallel. In addition, the two near-infrared cameras 2a and 2b are used while being synchronized with each other.


As illustrated in a block diagram of FIG. 5, a biological information measurement device 10 in the present embodiment includes the near-infrared cameras 2a and 2b, a detection unit 11, a luminance acquisition unit 12, a biological measurement unit 13, and a monitor 14. In addition, a control unit 15 is configured by putting together the detection unit 11, the luminance acquisition unit 12, and the biological measurement unit 13.


As illustrated in FIG. 5, it is preferable that the detection unit 11 includes a pupil detection unit 16, a skin area detection unit 17, and a visual line detection unit 23.


In the pupil detection unit 16, a difference image between a bright pupil image and a dark pupil image is created. The bright pupil image and the dark pupil image will be described below.



FIG. 3A is a bright pupil image 18 of the driver (object person) 1, and FIG. 3B is a dark pupil image 19 of the driver (object person) 1.


First, it is preferable that, in a state in which, using the first light-emitting elements 5, the face of the driver 1 is irradiated with near-infrared rays whose wavelengths are 870 nm, a face image is captured by the imaging element 7. In the face image captured in this way, the bright pupil image 18 in which pupils 20 are image-captured so as to be significantly brighter than the other part is obtained as illustrated in FIG. 3A. On the other hand, it is preferable that, in a state in which, using the second light-emitting elements 6, the face of the driver 1 is irradiated with near-infrared rays whose wavelengths are 940 nm, a face image is captured using the imaging element 7. In the face image captured in this way, the dark pupil image 19 in which the pupils 20 are image-captured so as to be darker than the bright pupil image 18 in FIG. 3A is obtained as illustrated in FIG. 3B. The wavelengths of the light-emitting elements are set to respective wavelengths for obtaining bright pupils and dark pupils.


The capturing of the bright pupil image 18 and the capturing of the dark pupil image 19 are performed in a time division manner. In addition, since, in the present embodiment, there are the two near-infrared cameras 2a and 2b, the bright pupil image 18 and the dark pupil image 19 are acquired in each of the near-infrared camera 2a and the near-infrared camera 2b.


It is preferable that, in the pupil detection unit 16 illustrated in FIG. 5, difference images between the bright pupil images 18 and the dark pupil images 19 are created. One of the difference images 21 is illustrated in FIG. 3C. Each of the difference images 21 can be obtained by, for example, subtracting the luminance values of the pixels of the corresponding dark pupil image 19 from the luminance values of the respective pixels of the corresponding bright pupil image 18.
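
As a non-limiting illustration of this subtraction, the following Python sketch (assuming OpenCV and NumPy, and grayscale frames from the near-infrared cameras; the function and file names are illustrative only) forms the difference image with saturating subtraction so that negative values are clipped to zero.

```python
import cv2
import numpy as np

def difference_image(bright_pupil: np.ndarray, dark_pupil: np.ndarray) -> np.ndarray:
    """Bright pupil image minus dark pupil image, clipped at zero.

    Skin and background largely cancel, while the retro-reflecting pupils
    remain bright, as in FIG. 3C.
    """
    # cv2.subtract performs saturating (clipped) subtraction on uint8 images.
    return cv2.subtract(bright_pupil, dark_pupil)
```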


As illustrated in FIG. 3C, each of the difference images 21 is left as an image in which the pupils 20 of the driver (object person) 1 are brighter than their peripheries. It is preferable that the bright portions illustrated in FIG. 3C are identified as pupil areas 22. Here, as illustrated in FIG. 3C, it is preferable that the two pupil areas 22 can be acquired. The bright pupil images and the dark pupil images described above are obtained using a method similar to that of Japanese Unexamined Patent Application Publication No. 2008-125619. A case where only one pupil area 22 can be obtained as a result of the above-mentioned method corresponds to a state in which only the pupil area of one eye can be acquired, and it may be determined that the driver 1 does not face forward (toward the image capturing unit). In such a case, the driver can be warned about looking away. It is preferable that the biological information is measured only after the pupil areas 22 of both eyes can be acquired.


While, in the above description, face images captured by radiating near-infrared rays whose wavelengths are 940 nm are set as the dark pupil images 19 used for obtaining the difference images 21, face images captured by, for example, not radiating light such as infrared rays may be set as the dark pupil images 19.


Next, in the skin area detection unit 17, skin areas made available for measuring the biological information are acquired, based on the pupil areas 22.



FIG. 4A is a pattern diagram that magnifies the vicinity of an eye illustrated in, for example, one of the dark pupil images 19. Since the pupil areas 22 are identified based on FIG. 3C, a skin area 24 for acquiring a luminance is identified around each of the pupil areas 22. The skin areas 24 are identified at positions located predetermined distances away from the pupil areas 22 and located outside thereof.


In addition, an area 25 of an eye (the dark iris portion) may be identified from each of the pupil areas 22, and the skin area 24 for acquiring a luminance may be identified from the corresponding area 25 of the eye.


If it is possible to identify the pupil areas 22, it is possible to estimate the area of a corresponding eye, based on the luminance or the like of an image of the periphery of each of the pupil areas 22, and accordingly, it is possible to keep a load on the detection unit 11 at a low level at the time of detecting the skin areas 24 located away from the pupil areas 22.


Binarization processing, labeling processing, screening processing, and so forth are executed in the detection unit 11 as necessary.
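
One way these binarization, labeling, and screening steps could be realized is sketched below, assuming OpenCV; the threshold and the blob-size limits are placeholder assumptions, not values taken from this disclosure.

```python
import cv2

def find_pupil_areas(diff, thresh=60, min_area=20, max_area=500):
    """Return centers and bounding boxes of candidate pupil blobs in a difference image."""
    # Binarization: keep only pixels bright enough to be pupil candidates.
    _, binary = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # Labeling: group connected bright pixels into blobs.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    pupils = []
    for i in range(1, n):  # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        # Screening: reject blobs that are too small or too large to be a pupil.
        if min_area <= area <= max_area:
            x, y, w, h = stats[i, :4]
            pupils.append({"center": tuple(centroids[i]), "bbox": (x, y, w, h)})
    return pupils
```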


In the luminance acquisition unit 12 illustrated in FIG. 5, the luminances of the skin areas 24 are acquired. At this time, it is preferable that the average values of luminances obtained from, for example, the respective images captured by the near-infrared camera 2a and the near-infrared camera 2b, or from images captured at different wavelengths, are continuously acquired. In addition, at this time, it is not necessary to acquire the luminances of all the pixels of each of the skin areas 24. In a case where a luminance is acquired from only one portion of each of the skin areas 24, it is preferable that, as illustrated in FIG. 4B, the luminances of the skin areas 26 located below the respective pupil areas 22 are acquired. The area located above each of the pupil areas 22 is on the eyelid side, and a blink makes it difficult to obtain a stable luminance there. Therefore, by acquiring the luminances of the skin areas 26 on the eye-bag side located below the respective pupil areas 22, it is possible to acquire luminances stably.
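
For illustration only, a patch on the eye-bag side could be sampled as sketched below; the offset and patch size are assumptions and would be tuned to the camera geometry.

```python
import numpy as np

def skin_luminance_below_pupil(gray: np.ndarray, pupil_center, pupil_radius: int,
                               offset: int = 10, patch: int = 12) -> float:
    """Average luminance of a small skin patch below the pupil (eye-bag side)."""
    cx, cy = int(pupil_center[0]), int(pupil_center[1])
    top = cy + pupil_radius + offset          # step past the lower lid, below the eye
    left = cx - patch // 2
    region = gray[top:top + patch, left:left + patch]
    return float(region.mean())               # a single averaged luminance per frame
```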


In the biological measurement unit 13 illustrated in FIG. 5, it is preferable that the biological information of the driver (object person) 1 is measured from the luminance of each of the skin areas obtained by the luminance acquisition unit 12. To measure the biological information, the change in luminance of each of the skin areas during a given period of time is first obtained. Next, low-frequency noise due to body movement or the like is removed, and an independent signal is extracted using an independent component analysis method. One method is to calculate the power spectrum of this independent signal, acquire a signal having a frequency peak in the vicinity of a heart rate or a breathing rate, and set this as an output signal. However, the biological information and the measuring method therefor are not specifically limited here.
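
A possible realization of this pipeline is sketched below, assuming NumPy, SciPy, and scikit-learn's FastICA; the sampling rate, the pass band, and the choice of FastICA are assumptions made for illustration, not requirements of the device.

```python
import numpy as np
from scipy.signal import butter, filtfilt, periodogram
from sklearn.decomposition import FastICA

def estimate_heart_rate(luminance_traces: np.ndarray, fs: float = 30.0) -> float:
    """luminance_traces: (n_samples, n_channels) skin-luminance time series."""
    # Remove slow drift caused by body movement with a band-pass filter
    # covering plausible heart rates (~0.7-3 Hz, i.e. 42-180 bpm) - assumed band.
    b, a = butter(3, [0.7 / (fs / 2), 3.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, luminance_traces, axis=0)

    # Separate independent signals; the pulse component is assumed to be one of them.
    sources = FastICA(n_components=filtered.shape[1], random_state=0).fit_transform(filtered)

    # Pick the component whose power spectrum has the strongest peak and
    # read the heart rate off that peak.
    best_bpm, best_power = 0.0, 0.0
    for s in sources.T:
        freqs, power = periodogram(s, fs)
        i = int(np.argmax(power))
        if power[i] > best_power:
            best_power, best_bpm = power[i], freqs[i] * 60.0
    return best_bpm
```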


As illustrated in FIG. 5, it is preferable that the input device 30 includes the biological information measurement device 10 and an input operation unit 31. Here, the input operation unit 31 may double as the monitor 14. Therefore, as a configuration element of the biological information measurement device 10, the monitor 14 may be present or may be omitted.


The biological information is transmitted from the biological measurement unit 13 to the input operation unit 31. Alternatively, it is preferable that pieces of information (pupil information, visual line information, and so forth) from the detection unit 11 illustrated in FIG. 5 are transmitted thereto. It is preferable that, in the input operation unit 31, based on these pieces of information transmitted from the biological information measurement device 10, a predetermined input operation or predetermined information transmission is executed. What input operation and what information transmission are performed will be described later.


It is preferable that, in the input operation unit 31, it is possible to predict the behavior of the driver (object person) 1 and execute an input operation. By focusing attention on, for example, the visual line of the driver 1, behavior prediction is performed based on the direction of the visual line, and an input operation based on the behavior prediction is executed.


It is preferred that the input device 30 illustrated in FIG. 5 in the present embodiment is mounted within a vehicle and that an input operation based on the pieces of information from the biological information measurement device 10 illustrated in FIG. 5 is made available for drive assist. For example, the response of automatic lighting may be accelerated based on the sizes of the pupils of the driver 1.


In the present embodiment, the skin areas 24 and 26 are identified based on the pupil areas 22. Therefore, it is possible to obtain the skin areas 24 and 26 stably, and it is possible to obtain the luminances of the skin areas 24 and 26 stably with a high degree of accuracy. Accordingly, the biological information obtained based on those luminances can also be obtained stably and with a high degree of accuracy. Unlike other parts of the face of the object person, the pupils can be detected correctly and reliably. Conversely, in a case where it is difficult to detect a pupil, it may be determined that the object person is turning sideways or is asleep. Accordingly, in such a case, it is possible to draw the object person's attention to facing forward (toward the near-infrared camera 2) with eyes open.


In addition, according to the present embodiment, it is not necessary to construct three-dimensional data from an entire face image in such a manner as described in Japanese Unexamined Patent Application Publication No. 2011-130996. Accordingly, compared with the related art, it is possible to reduce a control load (calculation load) on a device, and it is possible to smoothly perform up to measurement of the biological information.


Next, using the flowcharts illustrated in FIGS. 6A to 6C, the steps from acquisition of an image to activation of an input operation will be described.


In a step ST1 in FIG. 6A, the bright pupil images 18 and the dark pupil images 19 are acquired (see FIGS. 3A and 3B). As illustrated in FIG. 7, it is preferable that, by image-capturing under a condition that near-infrared rays whose wavelengths are 870 nm are radiated by the first light-emitting elements 5, it is possible to obtain the bright pupil images 18. In addition, it is preferable that, by image-capturing under a condition that near-infrared rays whose wavelengths are 940 nm are radiated by the second light-emitting elements 6, it is possible to obtain the dark pupil images 19. The dark pupil images 19 may be captured by radiating no light from the light-emitting elements.


Next, in a step ST2 in FIG. 6A, the pupil areas 22 are identified based on the respective difference images 21 between the bright pupil images 18 in FIG. 3A and the dark pupil images 19 in FIG. 3B.


Next, as illustrated in FIG. 6A, in a step ST3, corneal reflection images are acquired. As illustrated in FIG. 7, it is preferable that it is possible to acquire the corneal reflection images, based on the dark pupil images 19 captured under the condition that near-infrared rays whose wavelengths are, for example, 940 nm are radiated.


In FIG. 8, one of the corneal reflection images 35 is illustrated. FIG. 8 magnifies and illustrates the pupil portion of one of the dark pupil images. As illustrated in the dark pupil image, the corneal reflection image 35 appears brighter than the corresponding pupil 36, which appears dark. The corneal reflection images 35 are the reflection images of the light source reflected from the corneas of the driver 1.


Subsequently, in a step ST4 illustrated in FIG. 6A, visual line calculation is performed. As illustrated in FIG. 7, it is possible to perform the visual line calculation using the pupil areas and the corneal reflection images. An algorithm for the visual line calculation will be described using FIG. 9. The algorithm illustrated in FIG. 9 is similar to the method disclosed in WO2012020760 (A1).


As illustrated in FIG. 9A, it is assumed that a visual target plane 40 exists in front of the driver (object person) 1. This visual target plane 40 is, for example, a display. It is assumed that this visual target plane 40 is a surface parallel to an X-Y plane. A symbol P is the pupil center of the driver (object person) 1 and PT is a visual line vector. Q is a point of regard.


The corresponding near-infrared camera 2 is mounted in parallel to a virtual viewpoint plane 41 serving as a surface parallel to an X′-Y′ plane. In other words, a direction perpendicular to the virtual viewpoint plane 41 is the optical axis direction of a camera. T is a point of regard on the virtual viewpoint plane 41. PT′ is a camera pupil vector.


As illustrated in FIG. 9A, an angle between the visual line vector PT and the camera pupil vector PT′ is θ. In addition, an angle between a direction from a camera center to the point of regard T on the virtual viewpoint plane 41 and an X′-axis is φ.



FIG. 9B is a pattern diagram of one of pupil periphery images. G is one of the corneal reflection images. An angle between a line linearly connecting one of the corneal reflection images G with one of the pupil centers P and an X-axis is φ′.


E illustrated in each of FIGS. 9C to 9E schematically illustrates one of the eyeballs of the driver (object person) 1. In FIGS. 9C to 9E, the directions of the eyeball are different. In FIG. 9C, the directions of the visual line vector PT and the camera pupil vector PT′ coincide with each other. In other words, θ illustrated in FIG. 9A is zero. At this time, in the image of FIG. 9F, the pupil center P and the corneal reflection image G coincide with each other. In other words, the interval |r0| between the corresponding pupil center P and the corresponding corneal reflection image G is zero.


Next, in FIG. 9D, the visual line vector PT and the camera pupil vector PT′ do not coincide with each other, and an angle θ is generated between the visual line vector PT and the camera pupil vector PT′. At this time, in the image of FIG. 9G, an interval |r1| is generated between the corresponding pupil center P and the corresponding corneal reflection image G.


In addition, in FIG. 9E, the angle θ between the visual line vector PT and the camera pupil vector PT′ is larger than that in FIG. 9D. At this time, in the image of FIG. 9H, an interval |r2| is generated between the corresponding pupil center P and the corresponding corneal reflection image G, and this interval |r2| is larger than the interval |r1| illustrated in FIG. 9G.


Here, a relationship between (θ,φ) illustrated in FIGS. 9A and 9C to 9E and (|r|,φ′) illustrated in FIGS. 9B and 9F to 9H is a one-to-one correspondence.


In other words, the interval (distance) |r| between the corresponding pupil center P and the corresponding corneal reflection image G increases with an increase in the distance of the visual line from a camera optical axis (with an increase in θ). Accordingly, between θ and |r|, there is a relationship illustrated in FIG. 9I. It is preferable that, based on this relationship, it is possible to estimate the visual line.


Specifically, in a step ST10 illustrated in FIG. 6B (the specific step of the step ST4 in FIG. 6A), the displacement amounts of the corneal reflection images are calculated. The displacement amount of each of the corneal reflection images is indicated as the interval (distance) |r| between the corresponding pupil center P and the corresponding corneal reflection image G, illustrated in each of FIGS. 9F to 9H.


Note that it is preferable that, by executing pupil tracking based on improvement of resolution, it is possible to determine the corresponding pupil center P (step ST11).


In a step ST12 in FIG. 6B, the corresponding pupil center P and the corresponding corneal reflection image G are transformed to an X-Y coordinate system. Subsequently, in a step ST13 illustrated in FIG. 6B, a vector of pupil center P-corneal reflection image G is calculated, and based on trigonometry as illustrated in the relationship diagrams in FIGS. 9A and 9B, the point of regard T on the virtual viewpoint plane 41 is calculated (step ST14). In addition, in a step ST15, the visual line vector PT is calculated.
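
The steps ST10 to ST15 could be prototyped as below, where the mapping from |r| to θ is modeled as a simple linear gain k (an assumption; the actual relationship is whatever the calibrated curve of FIG. 9I gives) and the returned direction is expressed in a frame whose z axis is the camera pupil vector PT′.

```python
import numpy as np

def gaze_angles(pupil_center, corneal_reflection, k=0.05):
    """Return (theta, phi) from image coordinates of pupil center P and reflection G."""
    r = np.asarray(pupil_center, float) - np.asarray(corneal_reflection, float)
    theta = k * np.linalg.norm(r)      # FIG. 9I relationship, modeled here as linear (assumed)
    phi = np.arctan2(r[1], r[0])       # angle of the G-to-P vector against the image x axis
    return theta, phi

def visual_line_direction(theta, phi):
    """Unit visual-line direction in a frame whose z axis is the camera pupil vector PT'."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
```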


In addition, in the step ST4 in FIG. 6A, a face direction may be estimated. FIG. 10 is a perspective view when the face of the driver (object person) 1 is viewed from an oblique front side. It is preferable that, from the positions of the pupils 45 detected in the steps ST1 and ST2 in FIG. 6A, the existence range of the nostrils is estimated, and from this existence range (area), right and left nostrils 47 and 48 are detected (step ST20 in FIG. 6C). At this time, only one nostril may be detected. Note that if the nostrils were detected in a previous image (previous frame), the nostril positions for a subsequent image are estimated from the nostril positions in the previous image and tracked. In addition, in a case where no nostrils were detected in the previous image, a search for the nostrils is executed over the entire image.


As for the detection of the nostrils 47 and 48, it is possible to roughly determine the existence range of the nostrils from the positions of the pupils 45, and it is possible to confirm the nostrils based on luminance measurement within that range. In addition, performing the binarization processing on the image makes it easier to detect the nostrils.
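
A sketch of such a restricted search is given below; the ROI geometry (roughly one eye-span below the pupil line) and the dark-blob threshold are assumptions made only for illustration.

```python
import cv2
import numpy as np

def find_nostrils(gray, left_pupil, right_pupil, thresh=40):
    """Return up to two nostril candidates (image coordinates) below the pupil line."""
    lx, ly = left_pupil
    rx, ry = right_pupil
    eye_span = abs(rx - lx)
    # Assumed geometry: the nostrils lie roughly one eye-span below the pupils.
    top = int(max(ly, ry) + 0.8 * eye_span)
    bottom = int(top + 0.6 * eye_span)
    left, right = int(min(lx, rx)), int(max(lx, rx))
    roi = gray[top:bottom, left:right]
    # Nostrils appear dark in the near-infrared image, so binarize inverted.
    _, binary = cv2.threshold(roi, thresh, 255, cv2.THRESH_BINARY_INV)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(binary)
    # Keep the two largest dark blobs as nostril candidates (label 0 is background).
    order = np.argsort(stats[1:, cv2.CC_STAT_AREA])[::-1] + 1
    return [(centroids[i][0] + left, centroids[i][1] + top) for i in order[:2]]
```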


Subsequently, based on stereo calculation, the three-dimensional coordinates of a midpoint 43 of a line connecting the nostrils 47 and 48 and the three-dimensional coordinates of the pupils 45 are calculated (step ST21 illustrated in FIG. 6C).


In addition, as illustrated in FIG. 10, it is preferable that a line 44 normal to a triangle obtained by connecting the pupils 45 located on both right and left sides and the midpoint 43 with one another is obtained and this normal line 44 is estimated as the direction of the face (step ST22 in FIG. 6C).
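
Step ST22 amounts to a cross product, sketched below under the assumption that the three-dimensional coordinates from step ST21 are already available; the sign of the normal (toward or away from the camera) still has to be resolved separately.

```python
import numpy as np

def face_direction(left_pupil_3d, right_pupil_3d, nostril_midpoint_3d):
    """Unit normal of the triangle spanned by the two pupils and the nostril midpoint."""
    p_l = np.asarray(left_pupil_3d, float)
    p_r = np.asarray(right_pupil_3d, float)
    m = np.asarray(nostril_midpoint_3d, float)
    normal = np.cross(p_r - p_l, m - p_l)     # normal line 44 of the triangle in FIG. 10
    return normal / np.linalg.norm(normal)    # flip the sign if it points away from the camera
```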


Subsequently, in a step ST5 in FIG. 6A, the skin areas 24 surrounding the respective pupil areas 22 are identified (see FIGS. 4A and 4B). As illustrated in FIG. 4B, the respective skin areas 26 may be portions of the peripheries of the respective pupil areas 22, and in that case, it is preferred that the luminances of the skin areas 26 located below the respective pupil areas 22 are selected. In addition, in a step ST6 in FIG. 6A, the luminances of the skin areas 24 and 26 are acquired. It is preferred that the luminances are average values. The average value of luminances may be obtained using the average of the luminances of a corresponding skin area image-captured by a plurality of imaging elements or the average of the luminances of a corresponding skin area image-captured at different wavelengths. In addition, in a step ST7, based on the luminances of the skin areas, biological information is measured. According to the present embodiment, it is possible to obtain pieces of biological information such as a heart rate, a breathing rate, and a pulse rate.


In addition, in a step ST8 illustrated in FIG. 6A, the biological information is sent to the monitor 14 and displayed on the monitor 14. The monitor 14 is provided in, for example, an intermediate portion between the near-infrared cameras 2a and 2b illustrated in FIG. 2A. Alternatively, the monitor 14 may be an operation panel or the like that configures the input device 30. A touch panel or the like of a car navigation device arranged in a center console corresponds to the operation panel.


In addition, as illustrated in a step ST9 in FIG. 6A, the biological information is transmitted to the input operation unit 31 that configures the input device 30. In addition, pieces of information such as the face images (the bright pupil images and the dark pupil images) acquired in the step ST1, the pupil areas acquired in the step ST2, the corneal reflection images acquired in the step ST3, and the visual line direction and the face direction in FIG. 6C, acquired in the step ST4, are transmitted to the input operation unit 31. The number of pieces of information to be transmitted to the input operation unit 31 may be one or two or more.


It is preferable that, in the input operation unit 31, based on the information from the biological information measurement device 10, a predetermined input operation or predetermined information transmission is executed. Hereinafter, a specific example will be illustrated.


First, based on the visual line direction obtained in the step ST4 in FIG. 6A, it is possible to perform input assistance (drive assist). For example, as an input based on visual line detection, the sound volume or the like can be set using the visual line while a steering switch is pressed. As selection based on the visual line, a control device or the like can be selected using the visual line. As direction indicator assistance, when the driver views, for example, a sideview mirror, a blinker is turned on and a rear-left view is displayed on the monitor. If a blinker is turned on without the driver viewing a sideview mirror, a warning sound is emitted. In addition, if, after a warning is displayed on the monitor, the driver 1 views that display, the alert is canceled. Furthermore, if the frequency of looking around decreases, an alert such as "please pay attention to surroundings" or "please take a break" may be issued, or an action such as vibrating the seat may be taken.


In addition, as an input based on the pupil detection, ambient brightness is determined from the pupil sizes and the response of automatic lighting is accelerated. In addition, based on the pupil sizes, a mirror angle is controlled. In addition, based on the pupil sizes, the transmittance of a windshield is adjusted.


In addition, in a case where it is difficult to confirm the pupils, there is a possibility that the driver is falling asleep. Therefore, after displaying an alert on a meter or the like, it is possible to confirm, based on the subsequent pupil detection, whether that display is viewed.


In addition, the lips are detected from the positions of the pupils, and based on the movements of the lips, it is possible to improve the degree of accuracy of a voice input.


If looking away or looking aside is detected in, for example, a case where it is possible to confirm the pupil of only one eye, warning sound is emitted or visual line navigation is executed.


In addition to these, based on the visual line direction, it is possible to adjust the display height of a meter or the like, and it is possible to cause a seat height or a steering wheel height to be automatically adjusted. In addition, based on the distance between the right and left pupils, the distances from the nostrils, or the shapes of the nostrils, it is possible to execute personal authentication.


In the present embodiment, the two near-infrared cameras 2a and 2b are provided. While the number of near-infrared cameras may be set to one, it is preferable that two or more (at least two) near-infrared cameras 2a and 2b are installed, thereby enabling the distance from the driver (object person) 1 to be obtained using trigonometry. As illustrated in FIG. 11, the appearance of a target object R in an image 50 captured by the near-infrared camera 2a and its appearance in an image 51 captured by the near-infrared camera 2b differ from each other. From this difference, it is possible to identify the position of the target object R using trigonometry. Accordingly, in the present embodiment, it is possible to obtain the distance from the driver (object person) 1 from the images captured by the near-infrared cameras 2a and 2b, using trigonometry.
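
For a rectified, parallel two-camera arrangement, this distance estimate reduces to the standard depth-from-disparity relation sketched below; the baseline and focal length are placeholder values, not parameters of the cameras described here.

```python
def depth_from_disparity(x_left: float, x_right: float,
                         baseline_m: float = 0.06, focal_px: float = 900.0) -> float:
    """Distance to the target R from its horizontal image positions in the two cameras."""
    disparity = x_left - x_right              # pixels; a larger disparity means a closer target
    if disparity == 0:
        raise ValueError("target at infinity or cameras not rectified")
    return baseline_m * focal_px / disparity  # same triangulation as FIG. 11
```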


In addition, in the embodiment of FIG. 6A, the steps ST3, ST4, ST8, and ST9 are not essential steps but optional steps.


In the above-mentioned embodiment (the first embodiment), the pupil areas are identified in the step ST2 in FIG. 6A, and the skin areas for measuring luminances are identified based on the pupil areas.


In contrast, in a second embodiment illustrated in FIG. 12, it is preferable that, in a step ST30, the corneal reflection images 35 are acquired, and in a step ST31, the skin areas 24 are identified based on the corneal reflection images 35. As illustrated in FIG. 7, the corneal reflection images 35 may be obtained based on, for example, the dark pupil images 19.


Unlike the first embodiment, in this second embodiment, it is not essential to acquire the pupil areas 22. However, in place thereof, it is essential to acquire the corneal reflection images 35. In addition, as illustrated in FIG. 13, the skin areas 24 are identified around the respective corneal reflection images 35. The skin areas 24 are set at positions located a predetermined distance away from the respective corneal reflection images 35. Based on the luminances or the like of images, it is possible to identify the areas of eyes of the object person from the corneal reflection images 35, and it is possible to set the skin areas 24 around the identified eyes.


A step ST32 illustrated in FIG. 12 corresponds to the step ST6 in FIG. 6A, a step ST33 in FIG. 12 corresponds to the step ST7 in FIG. 6A, a step ST34 in FIG. 12 corresponds to the step ST8 in FIG. 6A, and a step ST35 in FIG. 12 corresponds to the step ST9 in FIG. 6A.


Note that, in the second embodiment illustrated in FIG. 12, it is preferable that, as illustrated in the steps ST1 and ST2 in FIG. 6A, the pupil areas 22 can be identified based on the respective difference images between the bright pupil images 18 and the dark pupil images 19. In such a case, it becomes possible to perform the visual line calculation and so forth, as illustrated in FIG. 7.


While the biological information measurement device 10 of the present embodiment and the input device 30 utilizing it are not limited to vehicle applications, using them for vehicle applications enables the biological information of the driver to be obtained during driving, and, based on that biological information, drive assist or the like can be performed.


By tracking, for example, pupils, the behavior prediction may be determined based on the tracking result thereof.


In addition, based on pieces of information (the pupil information, the visual line information, the biological information, and so forth) obtained from the biological information measurement device 10, it is possible to determine whether or not the object person is, for example, falling asleep. In such a case, it is possible to issue an early warning using sound or the like, and it is preferable that a predetermined input operation can be executed by predicting the behavior of the driver (object person).


While, in the above-mentioned embodiment, the driver is set as an object person whose biological information is to be measured, an occupant in a passenger seat or the like may be set as an object person without being limited to the driver.

Claims
  • 1. A biological information measurement device comprising: an image capturing unit that detects a pupil of an object person; a detection unit that detects a pupil area from image data obtained by the image capturing unit; a luminance acquisition unit that acquires a luminance of a skin area serving as at least a portion of the periphery of the pupil area; and a biological measurement unit that measures biological information of the object person from the luminance of the skin area.
  • 2. The biological information measurement device according to claim 1, wherein the image capturing unit includes an imaging element and a light-emitting element that irradiates the object person with light.
  • 3. The biological information measurement device according to claim 2, comprising a plurality of imaging elements.
  • 4. The biological information measurement device according to claim 2, wherein the light-emitting element includes a first light-emitting element that radiates an infrared ray having a first wavelength and a second light-emitting element that radiates an infrared ray having a second wavelength longer than the first wavelength, a bright pupil image is captured under a condition that the infrared ray having the first wavelength is radiated, and a dark pupil image is captured while the infrared ray having the second wavelength is radiated or the infrared ray is not radiated, and the pupil area is detected based on a difference image between the bright pupil image and the dark pupil image, in the detection unit.
  • 5. The biological information measurement device according to claim 2, wherein the detection unit detects a corneal reflection image.
  • 6. The biological information measurement device according to claim 1, wherein in the luminance acquisition unit, an average value of luminances of the skin area image-captured using the imaging elements or different wavelengths is acquired and input to the biological measurement unit.
  • 7. The biological information measurement device according to claim 1, wherein the detection unit executes pupil tracking.
  • 8. The biological information measurement device according to claim 1, wherein the detection unit detects a visual line of the object person.
  • 9. The biological information measurement device according to claim 1, wherein the detection unit detects a face direction of the object person.
  • 10. The biological information measurement device according to claim 1, wherein the luminance acquisition unit acquires a luminance of the skin area located below the pupil area.
  • 11. The biological information measurement device according to claim 1, wherein in a case where the pupil areas of two eyes of the object person are acquired, the luminance acquisition unit acquires the luminance of the skin area.
  • 12. The biological information measurement device according to claim 1, wherein the biological information measurement device is arranged within a vehicle.
  • 13. An input device comprising: a biological information measurement device comprising: an image capturing unit that detects a pupil of an object person; a detection unit that detects a pupil area from image data obtained by the image capturing unit; a luminance acquisition unit that acquires a luminance of a skin area serving as at least a portion of the periphery of the pupil area; and a biological measurement unit that measures biological information of the object person from the luminance of the skin area; and an input operation unit, wherein a predetermined input operation or predetermined information transmission is executed based on information from the biological information measurement device.
  • 14. The input device according to claim 13, wherein the input operation is executed by predicting a behavior of the object person.
  • 15. The input device according to claim 13, wherein the input operation is made available for drive assist.
Priority Claims (1)
Number Date Country Kind
2012-247983 Nov 2012 JP national
CLAIM OF PRIORITY

This application is a Continuation of International Application No. PCT/JP2013/080265 filed on Nov. 8, 2013, which claims benefit of priority to Japanese Patent Application No. 2012-247983 filed on Nov. 12, 2012. The entire contents of each application noted above are hereby incorporated by reference.

Continuations (1)
Number Date Country
Parent PCT/JP2013/080265 Nov 2013 US
Child 14709058 US