The present disclosure relates to a skin analysis apparatus.
Heretofore, the condition of the facial skin of a user has been analyzed based on face images captured by photographing the user's face from a plurality of different directions. Japanese Unexamined Patent Application Publication No. 2005-211581 discloses a face photographic apparatus in which a movable photography lighting unit having a camera and a lighting device rotates around a user's face to photograph the face from different directions.
An apparatus in which a mechanism moves to photograph a user's face from different directions, as in Japanese Unexamined Patent Application Publication No. 2005-211581, tends to require a large installation area, and it is therefore difficult to install the apparatus in a small space, such as a clinic.
Meanwhile, an apparatus that photographs a user's face in different orientations by using a camera in front of the user, with the user changing the orientation of his or her face, makes it possible to reduce the installation area. However, since the user's face is not fixed, it is difficult to reliably photograph the face in different orientations.
One non-limiting and exemplary embodiment provides a skin analysis apparatus that can reliably photograph a user's face in different orientations by using a camera in front of the user.
In one general aspect, the techniques disclosed here feature a skin analysis apparatus including: a housing; a camera and a display provided on a major surface of the housing; auxiliary mirrors, one of the auxiliary mirrors having a side end portion attached to a left-end portion of the housing, and the other auxiliary mirror having a side end portion attached to a right-end portion of the housing; and a controller that causes the camera to capture images of a user's front-view and side-view face and that analyzes skin of the user's face by using the face images. An internal angle θ formed by a major surface of the display and a major surface of each auxiliary mirror is an angle at which the camera is capable of capturing an image of the user's side-view face while the user's front-view face is seen in the auxiliary mirror.
According to one aspect of the present disclosure, it is possible to reliably photograph a user's face in different orientations by using a camera in front of the user.
It should be noted that general or specific embodiments may be implemented as a system, a method, an integrated circuit, a computer program, or a recording medium or may be implemented by an arbitrary combination of a system, an apparatus, a method, an integrated circuit, a computer program, and a recording medium.
Additional benefits and advantages of the disclosed embodiments will become apparent from the specification and drawings. The benefits and/or advantages may be individually obtained by the various embodiments and features of the specification and drawings, which need not all be provided in order to obtain one or more of such benefits and/or advantages.
Embodiments of the present disclosure will be described in detail with reference to the accompanying drawings, as appropriate. However, an overly detailed description may be omitted herein. For example, a detailed description of already well-known things and a redundant description of substantially the same configuration may be omitted herein. This is to avoid the following description becoming overly redundant and to facilitate understanding of those skilled in the art.
A user 2 is seated in front of a skin analysis apparatus 10, as illustrated in
The skin analysis apparatus 10 displays, on a display 102, a photography guide user interface (UI) 300 (see
The skin analysis apparatus 10 instructs the user 2 to face straight ahead and captures a front-view face image of the user 2. The skin analysis apparatus 10 also instructs the user 2 to turn his or her face to the left and captures a right-side-view face image of the user 2. Also, the skin analysis apparatus 10 instructs the user 2 to turn his or her face to the right and captures a left-side-view face image of the user 2 (S12). The captured face images are referred to as “post-photography face images”. Instead of instructing the user 2 to change the orientation of his or her face, the skin analysis apparatus 10 may automatically rotate a chair C in which the user 2 is seated to capture the right-side-view and left-side-view face images of the user 2.
The skin analysis apparatus 10 performs facial-part recognition processing on the post-photography face images (S13). Facial parts are characteristic parts in the face, and examples thereof include the contour of the face, the eyes, the nose, the mouth, the eyelids, and the hairline. The facial parts may also be referred to as facial portions, facial organs, facial feature parts, or the like.
Based on the positions of the facial parts recognized in S13, the skin analysis apparatus 10 sets areas in which skin analysis is to be performed on the post-photography face images (the areas are hereinafter referred to as “skin analysis areas”) (S14).
The skin analysis apparatus 10 executes the skin analysis on each skin analysis area set in S14 (S15).
The skin analysis apparatus 10 displays a result of the skin analysis, executed in S15, on the display 102 (S16).
By using the skin analysis apparatus 10, as described above, the user 2 can easily undergo the skin analysis. A detailed description will be given below.
Next, a functional configuration of the skin analysis apparatus 10 will be described with reference to
The skin analysis apparatus 10 includes, for example, the camera 101, the display 102, a speaker 103, an input interface 104, a storage unit 105, and a controller 106. The skin analysis apparatus 10 may also be connected to a database 90.
The camera 101 photographs the face of the user 2. Although the camera 101 is built into the skin analysis apparatus 10 in
The display 102 displays images and information. Although the display 102 is built into the skin analysis apparatus 10 in
The speaker 103 outputs sound. For example, the speaker 103 outputs sound notifying that photography has started, that photography has ended, and so on.
The input interface 104 receives instructions from the user 2. The skin analysis apparatus 10 may include a plurality of input interfaces 104. For example, the skin analysis apparatus 10 may include a touch panel, a mouse, a keyboard, a button for photography instruction, and a microphone for voice input as the input interfaces 104. Each input interface 104 may also be a device independent from the skin analysis apparatus 10. In such a case, the input interface 104 transmits input data to the skin analysis apparatus 10 through a predetermined cable communication or wireless communication.
The storage unit 105 stores data used by the controller 106. The storage unit 105 may be a volatile memory, such as a dynamic random-access memory (DRAM), or a nonvolatile memory, such as a solid-state drive (SSD). Alternatively, the storage unit 105 may be a combination of a volatile memory and a nonvolatile memory.
The controller 106 is, for example, a central processing unit (CPU) and realizes functions of the skin analysis apparatus 10. For example, by executing a computer program stored in the storage unit 105, the controller 106 realizes functions associated with a photography processor 201, a facial-part recognizer 202, an analysis-area setter 203, and a skin analysis executor 204, which are described below.
The photography processor 201 generates the photography guide UI 300 and displays it on the display 102. The photography processor 201 also displays a during-photography-face image on the display 102 in real time. The photography processor 201 captures a front-view face image, a left-side-view face image, and a right-side-view face image of the user to generate post-photography face images.
For example, the facial-part recognizer 202 recognizes facial parts by performing the following processing. That is, first, the facial-part recognizer 202 extracts feature points from each face image by using a known image processing technique. Next, based on the extracted feature points, the facial-part recognizer 202 recognizes facial parts, such as the facial contour, eyes, nose, mouth, eyelid, and hairline. The facial-part recognizer 202 may perform facial-part recognition processing not only on the post-photography face images but also on face images during photography.
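The recognition flow described above can be sketched as follows. This is an illustrative sketch only: it assumes a common 68-point feature-point convention, and the index ranges and the grouping function are assumptions for illustration, not the implementation disclosed for the facial-part recognizer 202.

```python
# Illustrative sketch of facial-part recognition from extracted feature
# points. Assumes a 68-point landmark layout (an assumption, not part of
# the disclosure): contour 0-16, nose 27-35, eyes 36-47, mouth 48-67.

FACIAL_PART_RANGES = {
    "contour": range(0, 17),
    "nose": range(27, 36),
    "right_eye": range(36, 42),
    "left_eye": range(42, 48),
    "mouth": range(48, 68),
}

def group_facial_parts(feature_points):
    """Group a flat list of 68 (x, y) feature points into named facial parts."""
    if len(feature_points) != 68:
        raise ValueError("expected 68 feature points")
    return {
        part: [feature_points[i] for i in idx_range]
        for part, idx_range in FACIAL_PART_RANGES.items()
    }

# Example with dummy feature points laid out on a grid.
points = [(i % 10, i // 10) for i in range(68)]
parts = group_facial_parts(points)
print(sorted(parts))          # recognized part names
print(len(parts["contour"]))  # 17 contour points
```

In a real system the feature points would come from a known image-processing technique, as the text states; the grouping step shown here merely illustrates how recognized points map to facial parts.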
Based on the positions of the facial parts recognized by the facial-part recognizer 202, the analysis-area setter 203 sets at least one skin analysis area in any of the face images.
The skin analysis executor 204 executes skin analysis on each skin analysis area set in the face images by the analysis-area setter 203. For example, the skin analysis executor 204 applies known image processing to each skin analysis area in the face images to analyze, for example, the amounts of wrinkles, freckles, and/or pores.
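As one illustrative sketch of the two steps above (area setting by the analysis-area setter 203 and analysis by the skin analysis executor 204), the snippet below derives a rectangular cheek analysis area from recognized eye and mouth positions and runs a trivial stand-in analysis on it. The geometric ratios, the margin, and the brightness-threshold "spot count" are all assumed values for illustration, not the disclosed algorithm.

```python
# Illustrative sketch: set a cheek skin-analysis area from facial-part
# positions, then run a stand-in spot analysis on that area.
# Ratios, margin, and threshold are assumptions for illustration.

def cheek_analysis_area(eye_center, mouth_center):
    """Return (x0, y0, x1, y1) of a rectangle between eye and mouth height."""
    ex, ey = eye_center
    mx, my = mouth_center
    top = ey + (my - ey) * 0.2      # just below the eye (assumed ratio)
    bottom = ey + (my - ey) * 0.8   # just above the mouth (assumed ratio)
    half_w = abs(mx - ex) + 20      # widen around the cheek (assumed margin)
    cx = (ex + mx) / 2
    return (cx - half_w, top, cx + half_w, bottom)

def count_spots(gray_pixels, threshold=60):
    """Stand-in analysis: count pixels darker than a threshold."""
    return sum(1 for row in gray_pixels for v in row if v < threshold)

area = cheek_analysis_area(eye_center=(120, 100), mouth_center=(110, 180))
pixels = [[200, 40, 210], [55, 220, 230]]
print(area)                 # (85.0, 116.0, 145.0, 164.0)
print(count_spots(pixels))  # 2
```

A real executor would apply known image processing (wrinkle, freckle, or pore analysis) to the pixels inside each such area; the counting function above is only a placeholder for that step.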
Face images of users 2 and results of skin analysis on the face images are associated with each other and are managed in the database 90. Although the database 90 is a device independent from the skin analysis apparatus 10 in
Next, a physical configuration of the skin analysis apparatus 10 according to a first embodiment will be described with reference to
As illustrated in
The housing 11 has a flat-plate shape and has the front mirror 21, the camera 101, and the display 102 on its major surface facing the user 2. The display 102 is provided inside the front mirror 21. Alternatively, the display 102 may have a configuration integrated with the front mirror 21 (e.g., a mirror display having a half mirror). The camera 101 is provided above the display 102. The upper light 13 is provided at an upper end portion of the housing 11. The upper light 13 may be constituted by a plurality of light-emitting diode (LED) elements.
The left auxiliary portion 12a has a flat-plate shape and has, on its major surface facing the user 2, a mirror (left auxiliary mirror) 22a and direction instruction LEDs 23a, which are examples of light sources. Similarly, the right auxiliary portion 12b has a flat-plate shape and has, on its major surface facing the user 2, a mirror (right auxiliary mirror) 22b and direction instruction LEDs 23b, which are examples of light sources. Details of the direction instruction LEDs 23a and 23b are described later. When the direction instruction LEDs 23a on the left auxiliary portion 12a and the direction instruction LEDs 23b on the right auxiliary portion 12b are described without discrimination therebetween, they are referred to as “direction instruction LEDs 23”.
As illustrated in
As illustrated in
The angle (internal angle) θmax formed by the major surface of the left auxiliary mirror 22a and the major surface of the front mirror 21 when the left auxiliary portion 12a is fully opened is an angle at which both eyes and the contour of the right-side-view face of the user 2 are appropriately captured by the camera 101 when the user 2 turns his or her face to the left, and the front-view face is seen in the left auxiliary mirror 22a. The hinge 31a has a lock mechanism for securing the left auxiliary portion 12a at the angle θmax. The right auxiliary portion 12b and the hinge 31b have structures that are the same as or similar to those of the left auxiliary portion 12a and the hinge 31a.
The left auxiliary portion 12a is provided with a marker 40a for adjusting the position and the size of the front-view face of the user 2 when the front-view face is seen in the left auxiliary mirror 22a. For example, the left auxiliary mirror 22a is provided with markers 41a for adjusting the positions of both eyes (these markers are hereinafter referred to as “eye markers”). The eye markers 41a may be provided at the same height as the camera 101. The left auxiliary portion 12a may be provided with a marker for adjusting the position of the contour of the face (this marker is hereinafter referred to as a “contour marker”, not illustrated), instead of or in addition to the eye markers 41a. Thus, when the user 2 turns to the left auxiliary portion 12a and adjusts the positions of both eyes and/or the contour of the face to the eye markers 41a and/or the contour marker, respectively, it is possible to reliably capture an image of the right-side-view face. The same also applies to eye markers 41b and a contour marker (not illustrated) on the right auxiliary portion 12b.
When the camera 101 is provided inside the front mirror 21, it is difficult for the user 2 to visually recognize the position of the camera 101. Accordingly, an LED 50, which is one example of a light source, is provided adjacent to the camera 101, and the controller 106 turns on the LED 50 during photography. This allows the user 2 to visually recognize the position of the camera 101 during photography and to direct his or her line-of-sight to the camera 101.
Next, the photography guide UI 300 will be described with reference to
The photography guide UI 300 has, for example, a during-photography-face image area 310, past-face-image areas 320, photograph buttons 330, and a face position guide 400.
A during-photography-face image of the user 2 is displayed in the during-photography-face image area 310. Face images of the same user 2 which were photographed in the past (these face images are hereinafter referred to as “past face images”) are displayed in the past-face-image areas 320. The past face images are stored in the database 90.
For capturing an image of the front-view face, the photography processor 201 displays front-view past face images of the same user 2 in the past-face-image areas 320. Similarly, for capturing an image of the right-side-view face, the photography processor 201 displays right-side-view past face images of the same user 2 in the past-face-image areas 320, and for capturing an image of the left-side-view face, the photography processor 201 displays left-side-view past face images of the same user 2 in the past-face-image areas 320. Since the past face images are displayed together with the during-photography-face image, the user 2 can adjust the position, the size, and the orientation of the during-photography-face image so that they match the position, the size, and the orientation of the past face images by moving the position of the face. Thus, a skin analysis result of the past face images and a skin analysis result of the post-photography face images can be compared with each other with higher accuracy.
The photography processor 201 displays the face position guide 400 in the during-photography-face image area 310. The face position guide 400 includes a face contour line guide 401, eye position guides 402, and a face center line guide 403. The face contour line guide 401, the eye position guides 402, and the face center line guide 403 may have different arrangements depending on the orientation of the face to be photographed, as illustrated in
During photography of a face image, the user 2 adjusts the contour of the during-photography-face image to the face contour line guide 401, adjusts the eye positions in the during-photography-face image to the eye position guides 402, and adjusts the center line (the ridge of the nose) in the during-photography-face image to the face center line guide 403. This allows the photography processor 201 to capture a face image at an appropriate position, with an appropriate size, and in an appropriate orientation for performing skin analysis.
The photography processor 201 may use color of the face position guide 400 to indicate whether or not the during-photography-face image matches the position, the size, and the orientation of the face position guide 400. For example, when the facial-part recognizer 202 succeeds in recognizing facial parts, and the during-photography-face image matches the position, the size, and the orientation of the face position guide 400, the photography processor 201 may switch the color of the face position guide 400 to blue. When the facial-part recognizer 202 succeeds in recognizing facial parts, and the during-photography-face image does not match the position, the size, and the orientation of the face position guide 400, the photography processor 201 may switch the color of the face position guide 400 to red. Also, when the facial-part recognizer 202 fails in recognizing facial parts, the photography processor 201 may switch the color of the face position guide 400 to orange. When the during-photography-face image does not match the position, the size, and the orientation of the face position guide 400, the photography processor 201 does not have to start photography. This makes it possible to efficiently capture an appropriate face image for skin analysis.
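The color-switching rule described above can be summarized as a small decision function. The following is a minimal sketch under the stated rule; the function names and color strings are illustrative conveniences, not identifiers from the disclosure.

```python
# Minimal sketch of the face-position-guide color rule described above:
#   blue   -> facial parts recognized and the pose matches the guide
#   red    -> facial parts recognized but the pose does not match
#   orange -> facial-part recognition failed

def guide_color(parts_recognized: bool, pose_matches: bool) -> str:
    if not parts_recognized:
        return "orange"
    return "blue" if pose_matches else "red"

def may_start_photography(parts_recognized: bool, pose_matches: bool) -> bool:
    # Photography is started only when the guide would be blue.
    return guide_color(parts_recognized, pose_matches) == "blue"

print(guide_color(True, True))        # blue
print(guide_color(True, False))       # red
print(guide_color(False, False))      # orange
print(may_start_photography(True, True))   # True
```

Gating the start of photography on the "blue" state mirrors the text's point that photography need not start until the during-photography-face image matches the guide.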
The photograph buttons 330 are respectively provided at a left-end portion and a right-end portion in the photography guide UI 300. Thus, at whichever of the left and right sides of the skin analysis apparatus 10 health personnel (e.g., a nurse) or the like is situated, the health personnel can touch the photograph button 330 without crossing between the camera 101 and the display 102 and the face of the user 2. The photograph button 330 may be provided at one of the left-end portion and the right-end portion in the photography guide UI 300, and the position of the photograph button 330 may be switchable through setting.
While the user 2 is facing one of the auxiliary mirrors 22 (i.e., during photography of the side view of the face), he or she cannot see the face position guide 400 displayed on the display 102 in front of the user 2. When the user 2 is looking in the auxiliary mirror 22, the direction instruction LEDs 23 on each auxiliary portion 12 are used in order to give guidance for the orientation of the face to the user 2 so that a during-photography-face image of the side view of the face matches the face position guide 400.
Next, a description will be given of one example of the operation of the direction instruction LEDs 23a provided on the left auxiliary portion 12a. The same also applies to the operation of the direction instruction LEDs 23b provided on the right auxiliary portion 12b.
For example, when a during-photography-face image of the right-side view of the face is facing upward too much, the photography processor 201 turns on (or blinks) a direction instruction LED 23aD indicating “down”. That is, the photography processor 201 instructs the user 2 to face downward a little. When a during-photography-face image of the right-side view of the face is facing downward too much, the photography processor 201 turns on (or blinks) a direction instruction LED 23aU indicating “up”. That is, the photography processor 201 instructs the user 2 to face upward a little. When a during-photography-face image of the right-side view of the face is facing leftward too much, the photography processor 201 turns on (or blinks) a direction instruction LED 23aR indicating “right”. That is, the photography processor 201 instructs the user 2 to face rightward a little. When a during-photography-face image of the right-side view of the face is facing rightward too much, the photography processor 201 turns on (or blinks) a direction instruction LED 23aL indicating “left”. That is, the photography processor 201 instructs the user 2 to face leftward a little. When a during-photography-face image of the right-side view of the face is in an appropriate orientation, the photography processor 201 turns on (or blinks) all the direction instruction LEDs 23aD, 23aU, 23aR, and 23aL. That is, the photography processor 201 notifies the user 2 that the during-photography-face image is in a correct orientation.
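The LED selection described above can be sketched as a function from the detected pose deviation to the set of LEDs to light. The pitch/yaw representation, the sign conventions, and the tolerance are assumptions for illustration; the disclosure does not specify how the deviation is measured.

```python
# Sketch of the direction-instruction logic: given how the detected
# side-view pose deviates from the target, choose which LED label(s)
# to light. Thresholds and sign conventions are assumptions.

def leds_to_light(pitch_error: float, yaw_error: float, tol: float = 5.0):
    """pitch_error > 0: facing up too much; yaw_error > 0: facing right too much.

    Returns the set of LED labels to turn on; all four lit together is
    the "correct orientation" signal described above.
    """
    leds = set()
    if pitch_error > tol:
        leds.add("down")    # facing up too much -> instruct to face down
    elif pitch_error < -tol:
        leds.add("up")
    if yaw_error > tol:
        leds.add("left")    # facing right too much -> instruct to face left
    elif yaw_error < -tol:
        leds.add("right")
    if not leds:            # within tolerance: light all four LEDs
        leds = {"up", "down", "left", "right"}
    return leds

print(leds_to_light(12.0, 0.0))   # {'down'}
print(leds_to_light(0.0, -9.0))   # {'right'}
print(leds_to_light(1.0, 2.0))    # all four labels
```

Note that both axes can be corrected at once (e.g., "down" and "left" together); the text describes each case separately, and this sketch simply composes them.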
Next, one example of the operation of the photography processor 201 when photography is started and when photography is completed will be described in detail.
Immediately before the right-side view of the face is photographed, and immediately before the left-side view of the face is photographed, the photography processor 201 may cause the speaker 103 to output sound indicating that the photography is started. This allows the photography processor 201 to give a notification indicating the start of the photography to the user 2 who is facing left or right and having difficulty in seeing the photography guide UI 300 displayed on the display 102 in front of the user 2. After the photography is completed, the photography processor 201 may also cause the speaker 103 to output sound indicating that the photography is completed.
Also, by using the direction instruction LEDs 23 provided on each auxiliary portion 12, the photography processor 201 may give notifications indicating that the photography is started and the photography is completed. For example, the photography processor 201 may blink all the direction instruction LEDs 23 immediately before the photography is started and may turn off all the direction instruction LEDs 23 after the photography is completed.
The facial-part recognizer 202 determines whether or not the hair covers the area of the forehead in the during-photography-face image. Upon determining that the hair covers the area of the forehead, the facial-part recognizer 202 may display, on the photography guide UI 300, an instruction for fixing the hair. In this case, the photography processor 201 does not have to start the photography.
For photographing the front-view face, the photography processor 201 may adjust the focus of the camera 101 to the positions of both eyes. For photographing the right-side view face, the photography processor 201 may adjust the focus of the camera 101 to the position of the right eye (the eye closer to the camera 101). For photographing the left-side view face, the photography processor 201 may adjust the focus of the camera 101 to the position of the left eye (the eye closer to the camera 101). Adjusting the focus of the camera 101 in such a manner makes it possible to capture a face image that is appropriate for skin analysis, since the eyes are located in the vicinity of the center in the depth direction of the face.
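The focus rule above reduces to a small selection function: both eyes for the front view, otherwise the eye nearer to the camera. A minimal sketch follows; the orientation labels and return values are illustrative conveniences, not identifiers from the disclosure.

```python
# Sketch of the focus-target rule described above. When the user turns
# the face to the left, the camera captures the right-side view and the
# right eye is nearer to the camera (and vice versa).

def focus_targets(orientation: str):
    """Return the eye position(s) the camera 101 should focus on."""
    if orientation == "front":
        return ("left_eye", "right_eye")
    if orientation == "right_side":   # face turned left; right eye is nearer
        return ("right_eye",)
    if orientation == "left_side":    # face turned right; left eye is nearer
        return ("left_eye",)
    raise ValueError(f"unknown orientation: {orientation}")

print(focus_targets("front"))       # ('left_eye', 'right_eye')
print(focus_targets("right_side"))  # ('right_eye',)
```

Focusing on the eye plane works, as the text notes, because the eyes lie near the center of the face in the depth direction.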
As illustrated in
Also, for capturing a plurality of face images in the same orientation, the photography processor 201 may correct post-photography face images so that the sizes and positions of the face match each other in the post-photography face images. For example, the photography processor 201 captures a first face image with the upper light 13 illuminating the face with horizontally polarized light, captures a second face image with the upper light 13 illuminating the face in the same orientation with vertically polarized light, and captures a third face image without illumination. When the sizes and/or the positions of the face are displaced in the first, second, and third face images, the photography processor 201 corrects the first, second, and third face images so that the sizes and/or the positions of the face match each other. The photography processor 201 may detect displacements among the post-photography images by using a known template matching technique.
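The displacement detection mentioned above can be sketched as a brute-force translation search in the spirit of template matching. This is a schematic stand-in: a real system would use an optimized matcher over full-resolution images, and the shift range here is an assumed parameter.

```python
# Sketch of detecting the displacement between two post-photography
# images, in the spirit of the template matching mentioned above:
# a brute-force sum-of-squared-differences search over small shifts.

def best_shift(ref, img, max_shift=3):
    """Find (dy, dx) minimizing mean SSD between ref and img shifted by (dy, dx)."""
    h, w = len(ref), len(ref[0])
    best = None
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            ssd, n = 0, 0
            for y in range(h):
                for x in range(w):
                    sy, sx = y + dy, x + dx
                    if 0 <= sy < h and 0 <= sx < w:
                        ssd += (ref[y][x] - img[sy][sx]) ** 2
                        n += 1
            score = ssd / n
            if best is None or score < best[0]:
                best = (score, dy, dx)
    return best[1], best[2]

# img is the same gradient scene as ref, shifted one pixel to the right.
ref = [[10 * y + x for x in range(5)] for y in range(5)]
img = [[10 * y + x - 1 for x in range(5)] for y in range(5)]
print(best_shift(ref, img))  # (0, 1)
```

Once the shift is known, correcting the first, second, and third face images so that the face positions match is a matter of translating each image by its detected displacement.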
A physical configuration example of a skin analysis apparatus 10 according to a second embodiment will be described with reference to
As illustrated in
A right-side end portion 29a of the left auxiliary portion 12a is attached to a position on a left-end portion of the housing 11 such that the height of the camera 101 lies between the upper end and the lower end of the left auxiliary mirror 22a. Similarly, a left-side end portion 29b of the right auxiliary portion 12b is attached to a position on a right-end portion of the housing 11 such that the height of the camera 101 lies between the upper end and the lower end of the right auxiliary mirror 22b. Each of the left auxiliary portion 12a and the right auxiliary portion 12b is attached to the housing 11 at the angle θmax described above with reference to
The size of the left auxiliary mirror 22a is such that the entire front-view face seen in the left auxiliary mirror 22a generally fits within it when the user 2 turns his or her face to the left. The same applies to the size of the right auxiliary mirror 22b. In other words, in
The left light 14a is provided at a portion that is included in the left-end portion of the housing 11 and that is located below the left auxiliary portion 12a. The right light 14b is provided at a portion that is included in the right-end portion of the housing 11 and that is located below the right auxiliary portion 12b.
As illustrated in
As illustrated in
A physical configuration example of a skin analysis apparatus 10 according to a third embodiment will be described with reference to
As illustrated in
The housing 11 is secured to the pedestal 15. Since the housing 11, the upper light 13, the front mirror 21, the camera 101, and the display 102 are the same as or similar to those in
The left support 16a is secured to a position that is included in the pedestal 15 and that is located at the left side of the housing 11, and extends in a height direction. The left auxiliary portion 12a is secured to an upper end portion of the left support 16a so that the left auxiliary portion 12a and the front mirror 21 form the angle θmax, which is described above with reference to
The left auxiliary portion 12a may also be secured to the left support 16a so that the height of the camera 101 is located between an upper end and a lower end of the left auxiliary mirror 22a. Similarly, the right auxiliary portion 12b may be secured to the right support 16b so that the height of the camera 101 is located between an upper end and a lower end of the right auxiliary mirror 22b. However, the heights of the left auxiliary portion 12a and the right auxiliary portion 12b are not limited to those heights and may be, for example, smaller than the height of the camera 101, as illustrated in
The left light 14a and the right light 14b are respectively provided at the left-end portion and the right-end portion of the housing 11. Compared with the structure illustrated in
A skin analysis apparatus 10 according to the present disclosure includes a housing 11, a camera 101 and a display 102 provided on a major surface of the housing 11, auxiliary portions 12a and 12b having side end portions 29a and 29b attached to a left-end portion and a right-end portion of the housing 11, respectively, and a controller 106 that causes the camera 101 to capture images of the front-view and side-view face of a user 2 and that analyzes skin of the face of the user 2 by using the images. An internal angle θ formed by a major surface of the display 102 and a major surface of each of the auxiliary portions 12a and 12b is an angle at which the camera 101 is capable of capturing an image of the side-view face of the user 2 while the front-view face of the user 2 is seen in the corresponding auxiliary portion 12a or 12b. In other words, for example, the internal angle θ may be an angle at which the camera 101 is capable of photographing at least both eyes of the user 2 (i.e., is capable of seeing both eyes). Alternatively, the internal angle θ may be an angle at which the camera 101 is capable of photographing the contour of a near side and a far side of the face of the user 2 relative to the camera 101. Alternatively, the internal angle θ may be an angle at which the contour of the cheek at a far side of the face of the user 2 relative to the camera 101 is not hidden by the nose. Alternatively, the internal angle θ may be an angle at which the camera 101 is capable of photographing both eyes of the user 2 and a contour of the face of the user 2 from the ear at a near side of the user 2 relative to the camera 101 to the chin of the user 2.
According to the configuration, when the user 2 turns his or her face to one of the auxiliary mirrors 22a and 22b so that his or her front-view face is seen in the auxiliary mirror 22a or 22b, the camera 101 in front of the user 2 can capture an image of his or her side-view face reliably (i.e., in substantially the same orientation). Thus, a skin analysis result of a face image captured in the current photography and a skin analysis result of a face image captured in past photography can be compared with each other with higher accuracy.
The embodiments according to the present disclosure have been described above in detail with reference to the accompanying drawings. The functions of the skin analysis apparatus 10 can also be realized by a computer program.
The reading device 2107 reads a program for realizing the functions of each apparatus described above from a recording medium on which the program is recorded and causes the program to be stored in the storage device 2106. Alternatively, the communication device 2108 communicates with a server apparatus, connected to a network, to download a program for realizing the functions of each apparatus from the server apparatus and causes the downloaded program to be stored in the storage device 2106.
The CPU 2103 copies the program, stored in the storage device 2106, to the RAM 2105, sequentially reads instructions included in the program from the RAM 2105, and executes the instructions to thereby realize the functions of each apparatus.
The present disclosure can be realized by software, hardware, or software that cooperates with hardware.
Each functional block used in the description of each embodiment above can be partly or entirely realized by an LSI, which is an integrated circuit, and each process described in each embodiment above may be controlled partly or entirely by one LSI or a combination of LSIs. The LSI may be individually formed as chips or may be formed by one chip so as to include a part or all of the functional blocks. The LSI may include an input and an output of data. The LSI may be referred to as an IC, a system LSI, a super LSI, or an ultra LSI depending on a difference in the degree of integration.
The technique of the circuit integration is not limited to the LSI and may be realized by using a dedicated circuit, a general-purpose processor, or a special-purpose processor. Also, a field programmable gate array (FPGA) that can be programmed after the manufacture of the LSI or a reconfigurable processor in which the connections and the settings of circuit cells arranged inside the LSI can be reconfigured may be used.
In addition, when a technology for circuit integration that replaces LSI becomes available with the advancement of semiconductor technology or another derivative technology, such a technology may also naturally be used to integrate the functional blocks. Application of biotechnology or the like is possible.
One aspect of the present disclosure is useful for a system that photographs the face.
Priority is claimed to Japanese Patent Application No. 2018-219041, filed in Japan in November 2018 (national filing).