This is a continuation of International Application No. PCT/JP2021/033184 filed on Sep. 9, 2021, which claims priority to Japanese Patent Application No. 2020-156214 filed on Sep. 17, 2020. The entire disclosures of these applications are incorporated by reference herein.
The present disclosure relates to a skin surface analysis device and a skin surface analysis method for analyzing a human skin surface.
On the surface of the human skin (skin surface), there are grooves called skin folds and areas called skin ridges bordered by skin folds. Human skin secretes minute sweat droplets even under resting conditions. This sweat secreted at rest is called basal sweating. Basal sweating is known to occur mainly at the skin folds, to correlate with skin surface hydration, and to play an important role in maintaining the skin's barrier function. Inflammatory skin diseases, such as atopic dermatitis, cholinergic urticaria, prurigo, and lichen amyloidosis, may develop or worsen when a basal sweating disturbance reduces the barrier function of the skin. A method of detecting a patient's basal sweating would therefore be useful for diagnosis and treatment, as it would provide information for determining a treatment plan.
The impression mold technique (IMT or IM method) is a method for detecting basal sweating and quantitating sweating function. A dental silicone impression material is applied to the skin surface and forms a film after a few minutes. When peeled from the skin, the silicone impression material carries a transcription of the skin surface microstructure and the sweating state.
Domestic Republication No. 2018-230733 of PCT International Application describes the technique.
The IMT allows precise transcription of a skin surface microstructure to a silicone material in the form of a film, thereby making it possible to identify skin ridges and measure the area of the skin ridges. The IMT also allows precise transcription of sweat droplets to the silicone material, thereby making it possible to measure the number, diameters, and areas of the sweat droplets. Accordingly, the conditions of the skin surface can be analyzed. Use of this analysis result is advantageous in quantitatively grasping, for example, the tendency of an atopic dermatitis patient to have a larger area of skin ridges and a smaller number of sweat droplets than a healthy person.
The IMT allows skin ridges and sweat droplets to be distinguished based on an enlarged image of a transcription surface of the silicone material. Specifically, a magnified image of the transcription surface of the silicone material, magnified by an optical microscope, is obtained and displayed on a monitor. While viewing the image on the monitor, an inspector identifies skin ridges and skin folds, surrounds and colors the portions corresponding to the skin ridges, and calculates the area of the colored portions. The inspector then locates sweat droplets, colors the portions corresponding to the sweat droplets, and calculates the area of the colored portions. This procedure allows a quantitative grasp of the conditions of the skin surface but has the following problems.
That is, not only is a skin surface microstructure complicated, but the microstructure also differs greatly depending on the skin disease from which a patient suffers. It therefore takes time for an inspector to determine which parts of an image are skin folds and which are skin ridges, and only a limited number of samples can be processed within a certain time. In addition, the silicone may contain bubbles that are hard to distinguish from sweat droplets; distinguishing the sweat droplets is thus also time- and labor-consuming work. Besides the long time needed to distinguish between skin folds and skin ridges and to distinguish sweat droplets, there is also the problem of individual variation, such as different distinguishing results due to differences in the inspectors' abilities to make determinations by viewing an image. Longer work may also lead to oversights or other problems.
Even a single person has different numbers of sweat droplets from part to part of the skin surface. An analysis result may be inappropriate unless a part with an average number of sweat droplets is set to be a measurement target. In order to grasp such an average part, distinguishing between the skin folds and the skin ridges and distinguishing the sweat droplets need to be made in a wide range of the skin surface, which is a factor that further increases the time required for the analysis.
The present disclosure was made in view of these problems. It is an objective of the present disclosure to improve the accuracy in analyzing the conditions of a skin surface and to reduce the time required for the analysis.
In order to achieve the objective, a first aspect of the present disclosure is directed to a skin surface analysis device for analyzing a skin surface, using a transcription material to which a human skin surface microstructure is transcribed, the skin surface analysis device including: an image input section to which an image obtained by imaging the transcription material is input; a local image enhancement processor configured to execute local image enhancement processing of enhancing contrast of a local region of the image input to the image input section to generate an enhanced image; a patch image generator configured to divide, into a plurality of patch images, the enhanced image generated by the local image enhancement processor; a machine learning identifier configured to receive the patch images generated by the patch image generator and execute segmentation of each of the patch images received; a whole image generator configured to generate a whole image by combining the patch images segmented and output from the machine learning identifier; a likelihood map generator configured to generate a likelihood map image of skin ridges based on a result of the segmentation from the whole image generated by the whole image generator; a binarization processor configured to execute binarization processing on the likelihood map image generated by the likelihood map generator to generate a binary image; a region extractor configured to extract a skin ridge region based on the binary image generated by the binarization processor; and a skin ridge analyzer configured to calculate an area of the skin ridge region extracted by the region extractor.
According to this configuration, local image enhancement processing is executed on an input image of the transcription material to which a human skin surface microstructure has been transcribed, to generate the enhanced image. This improves the visibility of the details of the image. The image before executing the local image enhancement processing may be a color image or a grayscale image. The enhanced image is divided into a plurality of patch images, each of which is then input to the machine learning identifier and segmented. The segmentation technique for each patch image is a known deep learning technique. This segmentation determines, for example, a category to which each pixel belongs, and categorizes the pixels into a skin ridge, a skin fold, a sweat droplet, and others. A whole image is generated by combining the patch images segmented and output from the machine learning identifier. From the whole image, a likelihood map image of skin ridges is generated based on a result of segmentation. A binary image is generated from the likelihood map image. In a case, for example, where white represents a skin ridge region, a skin ridge region can be distinguished by extracting a white region. The skin surface can be analyzed by calculating the area of the extracted skin ridge region.
In second and third aspects of the present disclosure, the skin surface analysis device may further include: a likelihood map generator configured to generate a likelihood map image of sweat droplets based on a result of the segmentation from the whole image generated by the whole image generator; a sweat droplet extractor configured to extract the sweat droplets based on the likelihood map image generated by the likelihood map generator; and a sweat droplet analyzer configured to calculate a distribution of the sweat droplets extracted by the sweat droplet extractor.
According to this configuration, a whole image is generated by combining the patch images segmented and output from the machine learning identifier. From the whole image, a likelihood map image of sweat droplets is generated based on a result of segmentation. In a case, for example, where white in the likelihood map image represents a sweat droplet, a sweat droplet can be distinguished by extracting a white region. The skin surface can be analyzed by calculating a distribution of the extracted sweat droplets.
In a fourth aspect of the present disclosure, the transcription material is obtained by an impression mold technique, and the skin surface analysis device further includes a grayscale processor configured to convert an image obtained by imaging the transcription material to grayscale.
The IMT allows precise transcription of the skin surface using silicone, which further improves the analysis accuracy. The silicone may be colored in pink, for example. However, according to this configuration, the image of the transcription material is converted to grayscale by the grayscale processor, thereby making it possible to handle the image as a grayscale image suitable for analysis. Accordingly, the processing speed can be increased.
In a fifth aspect of the present disclosure, the patch image generator may generate the patch images so that adjacent ones of the patch images partially overlap each other.
That is, if an image is divided into a plurality of patch images without overlapping adjacent patch images, an edge of a skin ridge or a sweat droplet may happen to overlap the boundary between the adjacent patch images, which may degrade the accuracy in distinguishing the skin ridge or the sweat droplet overlapping the boundary. By contrast, according to this configuration, the adjacent patch images partially overlap each other, allowing a skin ridge or a sweat droplet to be accurately distinguished even at the position described above.
In a sixth aspect of the present disclosure, an input image and an output image of the machine learning identifier may have a same resolution. This configuration allows accurate output of the shape of fine skin ridges and the size of a sweat droplet, for example.
In a seventh aspect of the present disclosure, the skin ridge analyzer sets, on an image, a plurality of grids in a predetermined size and calculates a ratio between the skin ridge region and a skin fold region in each of the grids.
According to this configuration, if, for example, the fineness of a skin surface needs to be evaluated, the fineness of the skin surface can be evaluated based on the ratio between the skin ridge region and the skin fold region in the grid set on a binary image or a grayscale image. A ratio of the skin ridge region equal to or higher than a predetermined value can be used as an index for determining a coarse skin, whereas a ratio of the skin ridge region lower than the predetermined value can be used as an index for determining a fine skin.
In an eighth aspect of the present disclosure, the skin ridge analyzer may convert the ratio between the skin ridge region and the skin fold region in each of the grids into numbers to obtain a frequency distribution (histogram).
In a ninth aspect of the present disclosure, the region extractor may determine, after extracting the skin ridge region, whether each portion of the skin ridge region extracted is raised, and divide the skin ridge region at a portion determined to be unraised.
That is, in some state of a disease, there may be a groove formed in a part of a skin ridge. In this case, an unraised portion, that is, a recess is present in the skin ridge region extracted. The skin ridge region divided by this recess is expected to be used for determination on the state of a disease and clinical evaluation.
In a tenth aspect of the present disclosure, the skin surface analysis device further includes an information output section configured to generate and output information on a shape of the skin ridge region extracted by the region extractor. Each piece of information can thus be presented to healthcare practitioners, for example, for use in making a diagnosis or other purposes.
As described above, the present disclosure allows generation of a likelihood map image of a skin surface, using a machine learning identifier, and allows a skin ridge region and sweat droplets to be distinguished, using the likelihood map image. It is therefore possible to eliminate individual variations in analysis and improve the accuracy in analyzing the conditions of the skin surface, and reduce the time required for the analysis.
An embodiment of the present invention will be described in detail below with reference to the drawings. The following description of a preferred embodiment is a mere example in nature, and is not intended to limit the present invention, its application, or its use.
A case will be described in this embodiment where a skin surface is analyzed using the transcription material 100 acquired by the IMT. A human skin surface microstructure may be however transcribed to the transcription material 100 by a method other than the IMT.
The IMT is a method for detecting basal sweating and quantitating sweating function. A dental silicone impression material is applied to the skin surface and forms a film after a few minutes. When peeled from the skin, the silicone impression material carries a transcription of the skin surface microstructure and the sweating state. The IMT has been typically used as a method of detecting basal sweating, and a detailed description thereof will thus be omitted. The dental silicone impression material may be colored pink, for example.
An inspector distinguishes a skin fold, a skin ridge, and sweat droplets from one another in this manner. However, it is not only that a skin surface microstructure is complicated as shown in
The skin surface analysis device 1 according to this embodiment allows generation of a likelihood map image of a skin surface, even based on images such as those shown in
Now, a configuration of the skin surface analysis device 1 will be described in detail. As shown in
The monitor 11 displays various images, user interface images for setting, or other images, and can be a liquid crystal display, for example. The keyboard 12 and the mouse 13 are those typically used as operation means for a personal computer or other devices. In place of or in addition to the keyboard 12 and the mouse 13, a touch panel or other input means may be provided. The main body 10, the monitor 11, and the operation means may be integrated.
As shown in
Although not shown, the controller 10b can be a system LSI, an MPU, a GPU, a DSP, or dedicated hardware, for example; it performs numerical calculations and information processing based on various programs and controls the hardware units. The hardware units are connected to each other via an electrical communication path (wire), such as a bus, for unidirectional or bidirectional communication. The controller 10b is configured to perform various processing as will be described later, which can be implemented by a logic circuit or by executing software. The processing executable by the controller 10b includes various general image processing. The controller 10b can be obtained by combining hardware and software.
First, a configuration of the controller 10b will be described, and then a skin surface analysis method by the controller 10b will be described with reference to a specific example image.
The controller 10b can take in an image from the outside directly or via the communicator 10a. The image taken in can be stored in the storage 10c. The image to be taken in is an image obtained by imaging the transcription surface of the transcription material 100 magnified by the stereo microscope 101, and serves as a basis for
The controller 10b includes an image input section 20 to which a color image or a grayscale image is input. An image converted to grayscale by a grayscale processor 21, which will be described later, may be input to the image input section 20, or an image converted to grayscale in advance outside the skin surface analysis device 1 may be input to the image input section 20. Similarly to the reading of an image into the grayscale processor 21 described above, an image can be input to the image input section 20 by a user of the skin surface analysis device 1. A color image can be input to the image input section 20.
The controller 10b includes the grayscale processor 21 for converting, if an image taken in is a color image, the color image to grayscale. The color image does not have to be converted to grayscale and may be, as it is, subjected to the local image enhancement processing and subsequent processing, which will be described later.
For example, an image can be taken in by a user of the skin surface analysis device 1. For example, an image magnified by the stereo microscope 101 is captured by an imaging device (not shown), and the resulting image data can be read into the grayscale processor 21. In this example, image data output from the imaging device and saved in the JPEG or PNG format is used. However, the format is not limited thereto; image data compressed in another compression format or a RAW image may also be used. In this example, an image is in a size of 1600×1200 pixels, but may be in any size.
The grayscale processor 21 converts a color image to grayscale with an 8-bit depth, for example. Specifically, the grayscale processor 21 converts an image to an image of pixels whose sample values contain no information other than the luminance. Grayscale differs from a binary image in that it expresses an image in colors from white at the strongest luminance to black at the weakest, including intermediate gray shades. The depth is not limited to 8 bits and can be any suitable value.
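By way of illustration only, the grayscale conversion described above might be sketched as follows; the BT.601 luminance weights used here are a common convention and an assumption of this sketch, not something specified in the present disclosure.

```python
import numpy as np

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 uint8 RGB image to an H x W uint8 grayscale image."""
    weights = np.array([0.299, 0.587, 0.114])  # BT.601 luma coefficients (assumed)
    gray = rgb.astype(np.float64) @ weights    # weighted sum over the color axis
    return np.clip(np.rint(gray), 0, 255).astype(np.uint8)
```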
The controller 10b includes a local image enhancement processor 22. The local image enhancement processor 22 executes local image enhancement processing of enhancing the contrast of a local region of a grayscale image, which has been input to the image input section 20, to generate an enhanced image. This improves the visibility of the details of the image. Examples of the local image enhancement processing include processing, such as histogram equalization, of enhancing the contrast of a local region of an image to improve the visibility of the details.
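One hedged sketch of such local image enhancement processing, assuming plain per-tile histogram equalization; a production implementation would more likely use CLAHE (which additionally clips histograms and interpolates between tiles), and the tile size here is an illustrative choice, not taken from the disclosure.

```python
import numpy as np

def equalize(tile: np.ndarray) -> np.ndarray:
    """Histogram-equalize one 8-bit grayscale tile."""
    hist = np.bincount(tile.ravel(), minlength=256)
    cdf = hist.cumsum()
    nonzero = cdf[cdf > 0]
    if nonzero.size == 0:
        return tile
    cdf_min = nonzero[0]
    denom = cdf[-1] - cdf_min
    if denom == 0:  # flat tile: nothing to enhance
        return tile
    lut = np.clip(np.rint((cdf - cdf_min) / denom * 255), 0, 255).astype(np.uint8)
    return lut[tile]

def local_equalize(img: np.ndarray, tile: int = 64) -> np.ndarray:
    """Enhance local contrast by equalizing each tile x tile block independently."""
    out = img.copy()
    h, w = img.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            out[y:y+tile, x:x+tile] = equalize(img[y:y+tile, x:x+tile])
    return out
```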
The controller 10b includes a patch image generator 23. The patch image generator 23 is a section that divides the enhanced image generated by the local image enhancement processor 22 into a plurality of patch images. Specifically, the patch image generator 23 divides an enhanced image in a size of 1600×1200 pixels, for example, into images (i.e., patch images) each in a size of 256×256 pixels. The patch image generator 23 can also generate the patch images so that adjacent patch images partially overlap each other. That is, a patch image generated by the patch image generator 23 partially overlaps the adjacent patch images. The overlapping range can be set to about 64 pixels, for example. This set overlapping range can be referred to as a “64-pixel stride,” for example. The pixel values described above are mere examples and may be any suitable values.
If an image is divided into a plurality of patch images without overlapping adjacent patch images, an edge of a skin ridge or a sweat droplet may happen to overlap the boundary between the adjacent patch images, which may degrade the accuracy in distinguishing the skin ridge or the sweat droplet overlapping the boundary by the machine learning identifier 24, which will be described later. By contrast, this example allows a skin ridge or a sweat droplet to be accurately distinguished even at the position described above, since adjacent patch images partially overlap each other.
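The overlapping division described above can be sketched as follows. The 256×256 patch size and roughly 64-pixel overlap come from the text; the resulting stride of 192 pixels (256 minus 64) and the snapping of the last patch to the image border are assumptions made for this sketch.

```python
import numpy as np

def patch_origins(length: int, patch: int, overlap: int):
    """Top-left coordinates along one axis so that patches cover [0, length)."""
    stride = patch - overlap
    origins = list(range(0, max(length - patch, 0) + 1, stride))
    if origins[-1] + patch < length:          # snap the final patch to the edge
        origins.append(length - patch)
    return origins

def split_into_patches(img: np.ndarray, patch: int = 256, overlap: int = 64):
    """Return a list of (y, x, patch_array) tuples covering the whole image."""
    ys = patch_origins(img.shape[0], patch, overlap)
    xs = patch_origins(img.shape[1], patch, overlap)
    return [(y, x, img[y:y+patch, x:x+patch]) for y in ys for x in xs]
```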
The controller 10b includes the machine learning identifier 24. The machine learning identifier 24 is a section that receives the patch images generated by the patch image generator 23 and executes segmentation of each of the input patch images. The machine learning identifier 24 segments each input image by a known deep learning technique. Based on this segmentation, the machine learning identifier 24 determines, for example, to which category each pixel belongs and outputs the result as an output image. The machine learning identifier 24 includes an input layer to which an input image is input, an output layer that outputs an output image, and a plurality of hidden layers between the input and output layers. The machine learning identifier 24 has learned a large quantity of teacher data, which enables automatic extraction of common features and flexible determination; this learning has already been completed.
In this example, the input and output images of the machine learning identifier 24 have the same resolution. In a case of a typical machine learning identifier, an input image has a higher resolution, and an output image is output at a lower resolution. In this example, however, the resolution of the output image is not reduced because the shape of fine skin ridges, the sizes of sweat droplets, and other factors need to be distinguished accurately. For example, if a patch image in a size of 256×256 pixels is input to the input layer of the machine learning identifier 24, an output image in a size of 256×256 pixels is output from the output layer.
The machine learning identifier 24 in this example can execute the detection of skin ridges and skin folds and the detection of sweat droplets at the same time. Specifically, the machine learning identifier 24 includes a skin ridge and skin fold detector 24a that detects skin ridges and skin folds, and a sweat droplet detector 24b that detects sweat droplets. Each of the skin ridge and skin fold detector 24a and the sweat droplet detector 24b can be constructed using, for example, Unet as a network.
The controller 10b includes a whole image generator 25. The whole image generator 25 is a section that generates a whole image by combining the patch images segmented and output from the machine learning identifier 24. Specifically, the whole image generator 25 combines the patch images output from the skin ridge and skin fold detector 24a into an image like the image before the division to generate a whole image for distinguishing skin ridges and skin folds, and combines the patch images output from the sweat droplet detector 24b in the same manner to generate a whole image for distinguishing sweat droplets. The whole image is in the same size as the image before the division.
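A hedged sketch of the recombination step follows. Averaging the per-pixel likelihoods where patches overlap is one reasonable merge rule; the disclosure does not specify how overlapping regions are merged, so this choice is an assumption.

```python
import numpy as np

def combine_patches(patches, height: int, width: int) -> np.ndarray:
    """patches: iterable of (y, x, 2-D float array of per-pixel likelihoods).
    Returns a whole image; overlapping contributions are averaged."""
    acc = np.zeros((height, width), dtype=np.float64)  # summed likelihoods
    cnt = np.zeros((height, width), dtype=np.float64)  # patches covering each pixel
    for y, x, p in patches:
        h, w = p.shape
        acc[y:y+h, x:x+w] += p
        cnt[y:y+h, x:x+w] += 1.0
    return acc / np.maximum(cnt, 1.0)  # average; uncovered pixels stay 0
```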
The controller 10b includes a likelihood map generator 26. The likelihood map generator 26 is a section that generates a likelihood map image of skin ridges from the whole image for distinguishing skin ridges and skin folds generated by the whole image generator 25 based on a result of the segmentation by the machine learning identifier 24. The likelihood map image is an image color-coded according to the likelihoods of pixels and relatively shows which pixel has a higher likelihood or a lower likelihood. For example, a color map image of pixels of the highest likelihood shown in red, pixels of the lowest likelihood in blue, and pixels therebetween expressed in 8-bit depths can be used as a likelihood map image of skin ridges and skin folds. This display format is a mere example and may be grayscale or a display format with different lightness, and may have depths other than 8 bits.
The likelihood map generator 26 generates a likelihood map image of sweat droplets from the whole image for distinguishing sweat droplets generated by the whole image generator 25 based on a result of the segmentation by the machine learning identifier 24. A color map image of pixels of the highest likelihood of a sweat droplet shown in red, pixels of the lowest likelihood of a sweat droplet in blue, and pixels therebetween expressed in 8-bit depths can be used as a likelihood map image of sweat droplets. Similarly to the case of the skin ridges and skin folds, the likelihood map image of sweat droplets may be displayed in grayscale, a display format with different lightness, and may have depths other than 8 bits.
The controller 10b has a binarization processor 27. The binarization processor 27 is a section that executes binarization processing on the likelihood map image, which has been generated by the likelihood map generator 26, to generate a binary image (i.e., a black and white image). The threshold Th used in the binarization processing may be set to be any value. For example, Th can be set to 150 (Th=150) in the case of 8-bit depths. It is possible to distinguish between skin folds and skin ridges by determining, for example, a black portion to be skin folds and a white portion to be skin ridges, using a likelihood map image based on a whole image for distinguishing skin ridges and skin folds. It is also possible to distinguish between sweat droplets and portions other than sweat droplets by determining, for example, white portions to be sweat droplets and black portions to be portions other than sweat droplets, based on a whole image for distinguishing sweat droplets.
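A minimal sketch of this thresholding step, using the example threshold Th = 150 from the text on an 8-bit likelihood map; values at or above the threshold become white (skin ridge) and the rest become black (skin fold).

```python
import numpy as np

def binarize(likelihood: np.ndarray, th: int = 150) -> np.ndarray:
    """Binarize an 8-bit likelihood map: >= th -> 255 (white), else 0 (black)."""
    return np.where(likelihood >= th, 255, 0).astype(np.uint8)
```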
The controller 10b includes a region extractor 28. The region extractor 28 is a section that extracts a skin ridge region based on a binary image generated by the binarization processor 27. Specifically, if white portions represent skin ridges in the binary image, a group of white pixels in the binary image is extracted as a skin ridge region. The region extractor 28 may extract a skin fold region based on a binary image generated by the binarization processor 27. In this case, if black portions represent skin folds in the binary image, a group of black pixels in the binary image is extracted as a skin fold region. The region extractor 28 may extract skin folds and thereafter extract the other region as the skin ridge region. Alternatively, the region extractor 28 may extract skin ridges and thereafter extract the other region as the skin fold region. As described below in the skin ridge analyzer 30, a grayscale image, in which a skin ridge is close to white, and a skin fold is close to black, can be used to observe the condition of the skin surface. In this case, the skin folds are represented by a luminance value close to black (0 for 8-bit images) and the skin ridges are represented by a luminance value close to white (255 for 8-bit images), allowing quantitative representation of the distribution and changes in the skin folds and skin ridges.
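A hedged sketch of extracting connected groups of white pixels from the binary image, using a simple 4-connectivity flood fill; this stands in for whatever connected-component labeling routine (e.g., from an image-processing library) an actual implementation would use.

```python
import numpy as np

def label_regions(binary: np.ndarray) -> np.ndarray:
    """Return an int label image: 0 = background, 1..N = connected white regions."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and labels[sy, sx] == 0:
                next_label += 1
                labels[sy, sx] = next_label
                stack = [(sy, sx)]          # iterative flood fill (4-connectivity)
                while stack:
                    y, x = stack.pop()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = next_label
                            stack.append((ny, nx))
    return labels
```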
The controller 10b includes a sweat droplet extractor 29. The sweat droplet extractor 29 is a section that extracts sweat droplets based on a likelihood map image of sweat droplets.
Specifically, if white (or red) portions in the likelihood map image of sweat droplets represent sweat droplets, a group of white (or red) pixels in the likelihood map image of sweat droplets is extracted as sweat droplets. The sweat droplet extractor 29 may extract portions other than sweat droplets, based on a likelihood map image of sweat droplets. In this case, if black (or blue) portions in the likelihood map image of sweat droplets represent portions other than sweat droplets, a group of black (or blue) pixels in the likelihood map image of sweat droplets is extracted as portions other than sweat droplets. The sweat droplet extractor 29 may extract portions other than sweat droplets from the likelihood map image of sweat droplets and thereafter extract the remaining portions as sweat droplets.
The transcription material 100 may contain bubbles, which may be erroneously distinguished as sweat droplets. In this case, a distinguishing method using dimensions is also applied. For example, a threshold for distinguishing is set to “40 μm” as an example. A small region with a diameter of 40 μm or less is distinguished as a bubble, and only a region with a diameter over 40 μm is distinguished as a sweat droplet. Another example of the threshold for distinguishing is an area. For example, the area of a circle with a diameter of 40 μm is obtained in advance. A small region with an area equal to or smaller than that area is distinguished as a bubble, and only a region with an area greater than that area is distinguished as a sweat droplet. The “diameter” may be, for example, a longitudinal diameter in a case of an elliptic approximation.
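The size-based bubble filter described above can be sketched as follows, using the area variant of the 40 μm example threshold. The micrometers-per-pixel scale would come from the microscope calibration and is an assumed input of this sketch.

```python
import math

def filter_sweat_droplets(areas_px, um_per_px: float, min_diameter_um: float = 40.0):
    """Return indices of candidate regions whose area exceeds that of a circle
    with the threshold diameter; smaller regions are treated as bubbles."""
    min_area_um2 = math.pi * (min_diameter_um / 2.0) ** 2  # area of a 40 um circle
    keep = []
    for i, area_px in enumerate(areas_px):
        area_um2 = area_px * um_per_px ** 2  # pixel count -> physical area
        if area_um2 > min_area_um2:
            keep.append(i)
    return keep
```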
The controller 10b includes a skin ridge analyzer 30. The skin ridge analyzer 30 is a section that calculates the area of a skin ridge region extracted by the region extractor 28. The skin ridge analyzer 30 can grasp the shape of a skin ridge by, for example, generating an outline surrounding a skin ridge region extracted by the region extractor 28. The skin ridge analyzer 30 can calculate the area of the skin ridges by obtaining the area of the region surrounded by the outline of the skin ridge. The skin ridge analyzer 30 can also grasp the shape of a skin fold by generating, for example, an outline surrounding a skin fold region extracted by the region extractor 28. The skin ridge analyzer 30 can also calculate the area of the skin folds by obtaining the area of the region surrounded by the outline of the skin fold.
The skin ridge analyzer 30 sets a plurality of grids in a predetermined size on a binary image or a grayscale image, and calculates the ratio between the skin ridge region and the skin fold region in each grid. Specifically, as an example, assume that a grid is set to divide a binary image into nine equal images, namely, first to ninth divisional images. In this case, the skin ridge analyzer 30 calculates the areas of the skin ridge region and the skin fold region included in each divisional image to obtain the ratio between the areas of the skin ridge region and the skin fold region. If, for example, the fineness of a skin surface needs to be evaluated, the fineness of the skin surface can be evaluated based on the ratio between the skin ridge region and the skin fold region in the grid set on a binary image or a grayscale image. A ratio of the skin ridge region higher than or equal to a predetermined value can be a criterion for determining a coarse skin. On the other hand, a ratio of the skin ridge region lower than the predetermined value can be a criterion for determining a fine skin.
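A sketch of this grid-based calculation, using the nine-equal-images example (a 3 × 3 grid) on a binary image where white pixels represent the skin ridge region.

```python
import numpy as np

def ridge_ratios(binary: np.ndarray, rows: int = 3, cols: int = 3) -> np.ndarray:
    """Return a rows x cols array of white-pixel (skin ridge) fractions per cell."""
    h, w = binary.shape
    ratios = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            cell = binary[r*h//rows:(r+1)*h//rows, c*w//cols:(c+1)*w//cols]
            ratios[r, c] = np.count_nonzero(cell) / cell.size
    return ratios
```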
Used in the following description of the embodiment is a result of analysis of skin ridges and skin folds (in which a skin ridge is close to white, and a skin fold is close to black) on a grayscale image using the skin ridge analyzer 30. A healthy person has a skin surface with a clear boundary between a skin ridge and a skin fold, which allows measurement of the area of the skin ridge. On the other hand, an atopic dermatitis patient may have a skin surface with an unclear boundary between a skin ridge and a skin fold. In this case, the grayscale image is used as it is for analysis: the ratios between the skin ridge and the skin fold in a plurality of grids are obtained from the grayscale values of the pixels in each grid, and the analysis result is displayed as a histogram, which can serve as a criterion for determining the fineness or other characteristics of the skin (as will be described later).
The skin ridge analyzer 30 converts the ratio between the skin ridge region and the skin fold region in each grid into numbers to calculate a frequency distribution. Specifically, the skin ridge analyzer 30 calculates the ratios between the areas of the skin ridge region and the skin fold region, converts the ratios into numbers, and summarizes the data in the form of a frequency distribution table. In addition, the skin ridge analyzer 30 can calculate the center of gravity of each skin ridge region, and a perimeter length, rectangular approximation, elliptic approximation, circularity, aspect ratio, density, and other characteristics of the skin ridge region.
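The conversion of the per-grid ratios into a frequency distribution can be sketched as follows; ten equal-width bins over [0, 1] is an illustrative binning choice, not one specified in the disclosure.

```python
import numpy as np

def ratio_histogram(ratios, bins: int = 10):
    """Summarize per-grid skin ridge ratios as a frequency distribution.
    Returns (counts, bin_edges) over the range [0, 1]."""
    counts, edges = np.histogram(np.asarray(ratios).ravel(),
                                 bins=bins, range=(0.0, 1.0))
    return counts, edges
```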
In some disease states, a groove may be formed in a part of a skin ridge. In this case, an unraised portion, that is, a recess is present in the extracted skin ridge region. Dividing the skin ridge region at this recess can serve as a criterion in determining the state of the disease and making a clinical evaluation. To this end, the skin ridge analyzer 30 determines, after extracting the skin ridge region, whether each portion of the extracted skin ridge region is raised, and divides the skin ridge region at any portion determined to be unraised. For example, a skin ridge region may include a groove-like portion. In this case, the skin ridge region is not fully raised but partially recessed (i.e., at the groove-like portion). The portion determined to be unraised, that is, the portion determined to be a recess, is the groove-like portion, which divides a single skin ridge region into a plurality of skin ridge regions.
The controller 10b includes the sweat droplet analyzer 31. The sweat droplet analyzer 31 calculates a distribution of the sweat droplets extracted by the sweat droplet extractor 29. The sweat droplet analyzer 31 can calculate, for example, the number of sweat droplets per unit area (e.g., 1 mm2 or 1 cm2) of a skin surface, the size (i.e., the diameter) of each sweat droplet, the area of the sweat droplet, and other factors. The sweat droplet analyzer 31 can also calculate the total area of the sweat droplets per unit area of a skin surface.
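As a rough illustration of the sweat droplet analyzer's calculations, the sketch below counts droplets per unit area and totals their areas from assumed diameters; all coordinates and sizes are invented for the example, and droplets are approximated as circles:

```python
import math

# Hypothetical extracted droplets: (x_mm, y_mm, diameter_mm)
droplets = [(0.2, 0.3, 0.05), (0.7, 0.1, 0.08), (0.5, 0.9, 0.06)]
field_area_mm2 = 1.0  # assumed observed field of 1 mm x 1 mm

# Number of sweat droplets per unit area of the skin surface
count_per_mm2 = len(droplets) / field_area_mm2

# Total area of the sweat droplets per unit area (circle approximation)
total_droplet_area = sum(math.pi * (d / 2) ** 2 for _, _, d in droplets)

print(count_per_mm2)                    # → 3.0
print(round(total_droplet_area, 5))     # → 0.00982
```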
The controller 10b includes an information output section 32. The information output section 32 generates and outputs information on the shape of a skin ridge region extracted by the region extractor 28 and information on sweat droplets extracted by the sweat droplet extractor 29. The information on the shape of a skin ridge region includes results of calculation by the skin ridge analyzer 30. Examples may include the area of a skin ridge region, the center of gravity of the skin ridge region, and a perimeter length, rectangular approximation, elliptic approximation, circularity, aspect ratio, density, and other characteristics of the skin ridge region. On the other hand, the information on sweat droplets includes results of calculation by the sweat droplet analyzer 31. Examples may include the number of sweat droplets per unit area, the total area of the sweat droplets per unit area, and other characteristics.
Next, a skin surface analysis method using the skin surface analysis device 1 configured as described above will be described with reference to specific example images. The flow of the skin surface analysis method is as shown in the flowcharts of
The process then proceeds to step S2. In step S2, the transcription material 100 is set in the stereo microscope 101 and observed at a predetermined magnification, and the observed field of view is imaged by an imaging device. In this manner, a color image (1600×1200 pixels) is obtained in the JPEG or the PNG format. Subsequently, the process proceeds to step S3, in which the color image captured by the imaging device is read into the controller 10b of the skin surface analysis device 1. The process then proceeds to step S4, in which the grayscale processor 21 (shown in
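The grayscale conversion in step S4 can be sketched as below. The luminance weights are a common convention and an assumption here, since the document does not specify the conversion formula used by the grayscale processor 21:

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an RGB image of shape (H, W, 3) to an 8-bit grayscale
    image using conventional luminance weights (an assumption; the
    document does not specify the weights)."""
    weights = np.array([0.299, 0.587, 0.114])
    return np.rint(rgb @ weights).astype(np.uint8)

# A pure white 2x2 color image maps to the maximum grayscale value
white = np.full((2, 2, 3), 255, dtype=np.uint8)
print(to_grayscale(white))  # → [[255 255] [255 255]]
```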
In the following step S5, the grayscale image is input to the image input section 20. This step corresponds to “image input.” Then, in step S6, the local image enhancement processor 22 executes local image enhancement processing on the grayscale image that is input in step S5. This step corresponds to “local image enhancement.”
The process then proceeds to step S7. In step S7, the patch image generator 23 divides the enhanced image generated in step S6 into a plurality of patch images.
After generating the patch images, the process proceeds to step S8. In step S8, the patch images generated in step S7 are input to the machine learning identifier 24 which executes segmentation of the input patch images. At this time, the same patch images are input to both the skin ridge and skin fold detector 24a and the sweat droplet detector 24b (steps S9 and S10). This step corresponds to “segmentation.”
Specifically, as shown in
In this example, as described above, in dividing the enhanced image into a plurality of patch images in step S7, adjacent patch images are overlapped with each other. If the adjacent patch images do not overlap each other, an edge of a skin ridge or a sweat droplet may happen to overlap the boundary between the adjacent patch images, which may degrade the accuracy in distinguishing the skin ridge or the sweat droplet overlapping the boundary. By contrast, in this example, the adjacent patch images partially overlap each other, allowing a skin ridge or a sweat droplet to be accurately distinguished even at the position described above.
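The overlapping patch division described above can be sketched as follows. The patch size and stride below are assumptions chosen for illustration, as the document does not give concrete values; a stride smaller than the patch size is what produces the overlap between adjacent patches:

```python
def patch_starts(length, patch, stride):
    """Start offsets so that patches of size `patch` cover `length` with
    overlap (stride < patch); a final patch is added flush with the edge
    if the regular stride would leave pixels uncovered."""
    starts = list(range(0, length - patch + 1, stride))
    if starts[-1] + patch < length:
        starts.append(length - patch)
    return starts

def split_with_overlap(h, w, patch=256, stride=192):
    """Return (top, left) corners of overlapping patch windows."""
    return [(t, l) for t in patch_starts(h, patch, stride)
                   for l in patch_starts(w, patch, stride)]

# For the 1600x1200 image of the example, 256-pixel patches with a
# 192-pixel stride (64-pixel overlap) give 6 x 8 = 48 windows.
corners = split_with_overlap(1200, 1600, patch=256, stride=192)
print(len(corners))  # → 48
```

Because every interior boundary is covered by the interior of some neighboring patch, a skin ridge or sweat droplet lying on a patch boundary still appears whole in at least one patch.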
After that, the process proceeds to step S11, in which the skin ridge and skin fold output images (i.e., patch images) after step S9 are combined to generate a whole image as shown in
Subsequently, the process proceeds to step S12 shown in
After generating the likelihood map image of skin ridges and the likelihood map image of sweat droplets, the process proceeds to step S13. In step S13, binarization processing is executed on the likelihood map image of skin ridges, which has been generated in step S12, to generate a binary image. This step is executed by the binarization processor 27 and corresponds to the “binarization processing”.
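A minimal sketch of the binarization in step S13, assuming the likelihood map holds values in [0, 1] and that a fixed threshold of 0.5 is applied (the actual threshold used by the binarization processor 27 is not specified in the document):

```python
import numpy as np

def binarize_likelihood(likelihood, threshold=0.5):
    """Turn a likelihood map (values in [0, 1]) into a binary image:
    pixels at or above the threshold become 1 (skin ridge), others 0."""
    return (likelihood >= threshold).astype(np.uint8)

lk = np.array([[0.1, 0.6],
               [0.8, 0.4]])
print(binarize_likelihood(lk))  # → [[0 1] [1 0]]
```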
After that, the process proceeds to step S14, in which the region extractor 28 extracts a skin ridge region based on the binary image generated in step S13. At this time, a skin fold region may be extracted.
The process proceeds to step S15, in which the sweat droplet extractor 29 extracts sweat droplets based on the likelihood map image of sweat droplets generated in step S12. This step corresponds to the “sweat droplet extraction.”
The process then proceeds to step S16. In step S16, comparison is made between the positions of the sweat droplets and the skin ridges and skin folds. The positions and ranges of the sweat droplets can be specified by XY coordinates on the image. The positions and ranges of skin ridges and skin folds can also be specified by the XY coordinates on the image. The image for specifying the positions and ranges of sweat droplets and the image for specifying the positions and ranges of skin ridges and skin folds are originally the same; thus, the sweat droplets can be placed on the image showing skin ridges and skin folds as shown in
The process then proceeds to step S17. In step S17, the sweat droplets in skin ridges and skin folds are identified.
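Since both the droplets and the ridge/fold regions share the same XY coordinate system, the identification in step S17 can be sketched by looking up each droplet center in the binary ridge mask; the mask and droplet centers below are invented for illustration:

```python
import numpy as np

def classify_droplets(ridge_mask, droplet_centers):
    """For each droplet center (x, y), report 'ridge' if it falls on a
    skin ridge pixel (mask value 1) and 'fold' otherwise."""
    return ['ridge' if ridge_mask[y, x] else 'fold'
            for x, y in droplet_centers]

mask = np.array([[1, 1, 0],
                 [1, 0, 0],
                 [0, 0, 0]])  # 1 = skin ridge, 0 = skin fold
labels = classify_droplets(mask, [(0, 0), (2, 2)])
print(labels)  # → ['ridge', 'fold']
```

A real implementation would likely test the droplet's whole extent rather than just its center, but the coordinate lookup is the same.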
After the identification, the process proceeds to step S18 and step S19. Either step S18 or S19 may be performed first. In step S18, a histogram showing skin ridge information is created and displayed on the monitor 11. First, the skin ridge analyzer 30 calculates the respective areas of the skin ridge regions extracted in step S14. Then, as shown in
In step S19, a heat map image of sweat droplets is created and displayed on the monitor 11. First, the sweat droplet analyzer 31 calculates a distribution of sweat droplets extracted in step S15. For example, as shown in
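The heat map creation in step S19 can be sketched as a coarse two-dimensional count of droplets per block of pixels, which a display routine would then color; the block size and droplet positions are assumptions for illustration:

```python
import numpy as np

def droplet_heatmap(centers, img_h, img_w, cell=100):
    """Count sweat droplets per cell x cell pixel block, giving a coarse
    2D density map that can be rendered as a heat map."""
    heat = np.zeros((img_h // cell, img_w // cell), dtype=int)
    for x, y in centers:
        heat[min(y // cell, heat.shape[0] - 1),
             min(x // cell, heat.shape[1] - 1)] += 1
    return heat

# Two droplets in the top-left block, one in the bottom-right block
centers = [(10, 20), (50, 30), (250, 150)]
print(droplet_heatmap(centers, img_h=200, img_w=300))  # → [[2 0 0] [0 0 1]]
```

Rendering the same counts for images captured at successive visits supports the time-series comparison described below.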
The creation of a heat map image is also advantageous in determining, as a pattern, the sweating and the conditions of skin ridges in a small area, information that cannot be obtained from individual analysis areas and cannot be determined even from a wide area if the entire area is averaged. Heat map images may be arranged in time series and displayed on the monitor 11. For example, heat map images are generated when one week, two weeks, and three weeks have elapsed since the start of treatment of an atopic dermatitis patient, and are displayed in the form of a list, thereby making it possible to determine whether the symptoms improve and to make a quantitative determination on the progress.
In the table shown in
These indices, too, can contribute to distinguishing the fineness of a skin surface. It is thus possible to distinguish the fineness of a skin surface using the machine learning identifier 24. Further, as shown in
The skin ridge analyzer 30 can also arrange images, such as the image shown in
(Quantification of Fineness of Skin Based on Ratio between Skin Ridges and Skin Folds)
A fine skin, such as the skin of a forearm of a healthy person, has a distribution with a peak at a central portion for any grid with a pixel size of 100×100, 150×150, 200×200, or 250×250. In addition, since the ratio between skin ridges and skin folds is known, it is possible to quantify, based on the grid size, not only the size of the skin ridges but also the size of the skin folds.
Next, the cases of an atopic dermatitis patient will be described.
As described above, this embodiment allows generation of a likelihood map image of a skin surface using the machine learning identifier 24, and allows a skin ridge region and sweat droplets to be distinguished using the likelihood map image. It is therefore possible to eliminate individual variations in analysis, improve the accuracy in analyzing the conditions of the skin surface, and reduce the time required for the analysis.
The embodiment described above is a mere example in all respects and should not be interpreted as limiting. All modifications and changes belonging to the equivalent scope of the claims fall within the scope of the present invention.
As described above, the skin surface analysis device and the skin surface analysis method according to the present invention can be used to analyze a human skin surface, for example.
Number | Date | Country | Kind
---|---|---|---
2020-156214 | Sep 2020 | JP | national

 | Number | Date | Country
---|---|---|---
Parent | PCT/JP2021/033184 | Sep 2021 | US
Child | 18120366 | | US