The present disclosure relates to a method and an apparatus for evaluating skin condition.
The growing awareness of anti-aging has increased the importance of evaluating skin condition, because such evaluation helps to improve skin condition. For example, International Publication No. 2016/080266 discloses a method for evaluating a skin blemish. In this method, blemish regions are extracted through image processing from the entirety of an RGB image of a user's skin, and the number of blemishes and the areas and densities of the blemishes are quantitatively evaluated on the basis of the extracted blemish regions.
In recent years, image capturing apparatuses have been developed that can acquire image information at more wavelengths than RGB images. U.S. Pat. No. 9,599,511 discloses an image capturing apparatus that uses compressed-sensing technology to obtain a hyperspectral image of an object.
One non-limiting and exemplary embodiment provides a technology for reducing the processing load in evaluation of skin condition.
In one general aspect, the techniques disclosed here feature a method for evaluating a user's skin condition, the method being performed by a computer. The method includes acquiring image data concerning part of the user's body and including information for four or more bands, determining an evaluation region in an image representing the part of the user's body in accordance with an input from the user, and generating, based on the image data, and outputting evaluation data representing an evaluation result of skin condition in the evaluation region.
A general or specific embodiment according to the present disclosure may be realized by a system, an apparatus, a method, an integrated circuit, a computer program, or a computer readable recording medium such as a recording disc or by a combination of some or all of a system, an apparatus, a method, an integrated circuit, a computer program, and a recording medium. Examples of the computer readable recording medium include nonvolatile recording media such as a compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), and a Blu-ray Disc (BD). The apparatus may be formed by one or more devices. In a case where the apparatus is formed by two or more devices, the two or more devices may be arranged in one apparatus or may be arranged in two or more separate apparatuses in a divided manner. In the present specification and the claims, an “apparatus” may refer not only to one apparatus but also to a system formed by apparatuses. The apparatuses included in the “system” may include an apparatus installed at a remote place away from the other apparatuses and connected to the other apparatuses via a communication network.
According to a technology of the present disclosure, the processing load in evaluation of skin condition can be reduced.
It should be noted that general or specific embodiments may be implemented as a system, a method, an integrated circuit, a computer program, a storage medium, or any selective combination thereof.
Additional benefits and advantages of the disclosed embodiments will become apparent from the specification and drawings. The benefits and/or advantages may be individually obtained by the various embodiments and features of the specification and drawings, which need not all be provided in order to obtain one or more of such benefits and/or advantages.
In the present disclosure, all or some of circuits, units, devices, members, or portions or all or some of the functional blocks of a block diagram may be executed by, for example, one or more electronic circuits including a semiconductor device, a semiconductor integrated circuit (IC), or a large-scale integration circuit (LSI). The LSI or the IC may be integrated onto one chip or may be formed by combining chips. For example, functional blocks other than a storage device may be integrated onto one chip. In this case, the term LSI or IC is used; however, the term to be used may change depending on the degree of integration, and the term “system LSI”, “very large-scale integration (VLSI)”, or “ultra-large-scale integration (ULSI)” may be used. A field-programmable gate array (FPGA) or a reconfigurable logic device that allows reconfiguration of interconnection inside an LSI or setup of a circuit section inside an LSI can also be used for the same purpose, the FPGA and the reconfigurable logic device being programmed after the LSIs are manufactured.
Furthermore, functions or operations of all or some of the circuits, the units, the devices, the members, or the portions can be executed through software processing. In this case, software is recorded in one or more non-transitory recording media, such as a read-only memory (ROM), an optical disc, or a hard disk drive, and when the software is executed by a processing device (a processor), the function specified by the software is executed by the processing device (the processor) and peripheral devices. The system or the apparatus may include the one or more non-transitory recording media in which the software is recorded, the processing device (the processor), and a hardware device that is needed, such as an interface.
In the following, exemplary embodiments of the present disclosure will be described. Note that any one of embodiments to be described below is intended to represent a general or specific example. Numerical values, shapes, constituent elements, arrangement positions and connection forms of the constituent elements, steps, and the order of steps are examples, and are not intended to limit the present disclosure. Among the constituent elements of the following embodiments, constituent elements that are not described in independent claims representing the most generic concept are described as optional constituent elements. Each drawing is a schematic diagram and is not necessarily precisely illustrated. Furthermore, in each drawing, substantially the same or similar constituent elements are denoted by the same reference signs. Redundant description may be omitted or simplified.
First, an example of a hyperspectral image will be briefly described with reference to
In the example illustrated in
Next, an example of a method for generating a hyperspectral image will be briefly described. A hyperspectral image can be acquired through imaging performed using, for example, a spectroscopic element such as a prism or a grating. In a case where a prism is used, when reflected light or transmitted light from an object passes through the prism, the light is emitted from a light emission surface of the prism at an emission angle corresponding to the wavelength of the light. In a case where a grating is used, when reflected light or transmitted light from the object is incident on the grating, the light is diffracted at a diffraction angle corresponding to the wavelength of the light. A hyperspectral image can be obtained by separating, using a prism or a grating, light from the object into bands and detecting the separated light on a band basis.
A hyperspectral image can also be acquired using the compressed-sensing technology disclosed in U.S. Pat. No. 9,599,511. In the compressed-sensing technology disclosed in U.S. Pat. No. 9,599,511, light reflected by an object passes through a filter array referred to as an encoder and is then detected by an image sensor. The filter array includes filters arranged two-dimensionally. These filters each have a transmission spectrum unique thereto. Through imaging using such a filter array, one two-dimensional image into which image information regarding bands is compressed is obtained as a compressed image. In the compressed image, the spectrum information regarding the object is compressed and recorded as one pixel value per pixel.
A hyperspectral image can be reconstructed from a compressed image using data representing the spatial distribution of luminous transmittance of each band of the filter array. For reconstruction, compressed-sensing technology is used. The data used in reconstruction processing and representing the spatial distribution of luminous transmittance of each band of the filter array is referred to as a “reconstruction table”. In the compressed-sensing technology, a prism or a grating does not need to be used, and thus a hyperspectral camera can be miniaturized. Furthermore, in the compressed-sensing technology, the amount of data processed by a processing circuit can be reduced by using a compressed image.
Next, a method for reconstructing a hyperspectral image from a compressed image using a reconstruction table will be described. Compressed image data g acquired by the image sensor, a reconstruction table H, and hyperspectral image data f satisfy Eq. (1) below.
g=Hf (1)
In this case, the compressed image data g and the hyperspectral image data f constitute vector data, and the reconstruction table H is matrix data. When the number of pixels of the compressed image data g is denoted by Ng, the compressed image data g is expressed as a one-dimensional array, that is, a vector having Ng elements. When the number of pixels of the hyperspectral image data f is denoted by Nf, and the number of bands is denoted by M, the hyperspectral image data f is expressed as a one-dimensional array, that is, a vector having (Nf×M) elements. The reconstruction table H is expressed as a matrix having elements of Ng rows and (Nf×M) columns. Ng and Nf can be designed to have the same value.
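The dimensions described above can be illustrated with a small numerical sketch. The sizes Ng = 16, Nf = 16, and M = 5 below are arbitrary examples chosen for illustration, not values from the disclosure:

```python
import numpy as np

# Illustrative sizes only: a tiny 4x4-pixel sensor (Ng = 16)
# reconstructing M = 5 bands at the same spatial resolution (Nf = 16).
Ng = 16          # number of pixels in the compressed image g
Nf = 16          # number of spatial pixels in the hyperspectral image f
M = 5            # number of bands

rng = np.random.default_rng(0)
H = rng.random((Ng, Nf * M))   # reconstruction table: Ng rows, (Nf x M) columns
f = rng.random(Nf * M)         # hyperspectral data flattened to a vector
g = H @ f                      # Eq. (1): g = Hf, one value per sensor pixel

print(g.shape)
```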
When the vector g and the matrix H are given, it seems that f can be calculated by solving an inverse problem of Eq. (1). However, the number of elements (Nf×M) of the data f to be obtained is greater than the number of elements Ng of the acquired data g, and thus this problem is an ill-posed problem, and the problem cannot be solved as it is. Thus, the redundancy of the images included in the data f is used to obtain a solution using a compressed-sensing method. Specifically, the data f to be obtained is estimated by solving the following Eq. (2).
f′ = argmin_f {||g − Hf||² + τΦ(f)}  (2)

In this case, f′ denotes the estimated data of the data f. The first term in the braces of the equation above represents a shift between an estimation result Hf and the acquired data g, a so-called residual term. In this case, the sum of squares is treated as the residual term; however, an absolute value, a root-sum-square value, or the like may be treated as the residual term instead. The second term in the braces is a regularization term or a stabilization term, which will be described later. Eq. (2) means obtaining f′ that minimizes the sum of the first term and the second term. The processing circuit can cause the solution to converge through a recursive iterative operation and can calculate the final solution f′.
The first term in the braces of Eq. (2) refers to a calculation for obtaining the sum of squares of the differences between the acquired data g and Hf, which is obtained by performing a system conversion on f in the estimation process using the matrix H. The second term Φ(f) is a constraint for regularization of f and is a function that reflects sparse information regarding the estimated data. This function provides an effect of smoothing or stabilizing the estimated data. The regularization term can be expressed using, for example, discrete cosine transformation (DCT), wavelet transform, Fourier transform, or total variation (TV) of f. For example, in a case where total variation is used, stabilized estimated data can be acquired in which the effect of noise of the observation data g is suppressed. The sparsity of the object in the space of each regularization term differs with the texture of the object. A regularization term having a space in which the texture of the object becomes sparser may be selected. Alternatively, a plurality of regularization terms may be included in the calculation. τ is a weighting factor. The greater the weighting factor τ, the greater the amount of reduction of redundant data, thereby increasing the compression rate. The smaller the weighting factor τ, the weaker the convergence to the solution. The weighting factor τ is set to an appropriate value with which f converges to a certain degree and is not compressed too much.
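The recursive iterative operation described above can be illustrated with a minimal sketch. The example below minimizes a small instance of Eq. (2) with the regularizer Φ(f) = ||f||₁ using the ISTA soft-thresholding iteration; the sizes, values, and choice of regularizer and solver are assumptions made purely for illustration, as the disclosure does not prescribe a particular algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
Ng, N = 16, 80                        # Ng < N: under-determined, as in the text
H = rng.random((Ng, N))               # stand-in for the reconstruction table
f_true = np.zeros(N)
f_true[[3, 27, 55]] = 1.0             # sparse ground truth
g = H @ f_true                        # simulated compressed measurement

# ISTA: gradient step on the residual term, then soft-thresholding
# corresponding to the L1 regularization term with weight tau.
tau = 0.01
step = 1.0 / np.linalg.norm(H, 2) ** 2   # 1 / (largest singular value)^2
f = np.zeros(N)
for _ in range(5000):
    grad = H.T @ (H @ f - g)              # gradient of ||g - Hf||^2 / 2
    f = f - step * grad
    f = np.sign(f) * np.maximum(np.abs(f) - step * tau, 0.0)

residual = np.linalg.norm(H @ f - g)      # shrinks as the iteration converges
```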
A more detailed method for acquiring a hyperspectral image using a compressed-sensing technology is disclosed in U.S. Pat. No. 9,599,511. The entirety of the disclosed content of U.S. Pat. No. 9,599,511 is incorporated herein by reference. Note that a method for acquiring a hyperspectral image through imaging is not limited to the above-described method using compressed sensing. For example, a hyperspectral image may be acquired through imaging using a filter array in which pixel regions including four or more filters having different transmission wavelength ranges from each other are arranged two-dimensionally. Alternatively, a hyperspectral image may be acquired using a spectroscopic method using a prism or a grating.
A hyperspectral camera can evaluate skin condition more accurately than a typical RGB camera. On the other hand, hyperspectral image data may cause a high processing load because it contains image information for many bands. By using a method for evaluating skin condition according to an embodiment of the present disclosure, such a processing load can be reduced. In a method according to an embodiment of the present disclosure, skin condition in an evaluation region of a part of a user's body is evaluated on the basis of image data concerning the part of the user's body. The image data includes information for four or more bands. Such image data may be, for example, compressed image data or hyperspectral image data. The evaluation region is determined in accordance with an input from the user. Unlike the method described in International Publication No. 2016/080266, the method according to the present embodiment does not need to process the entirety of the image data to extract a specific region. As a result, it is possible to reduce the processing load in evaluation of skin condition. In the following, a method and an apparatus for evaluating skin condition according to embodiments of the present disclosure will be described.
A method according to a first aspect is a method for evaluating a user's skin condition, the method being performed by a computer. The method includes acquiring image data concerning part of the user's body and including information for four or more bands, determining an evaluation region in an image representing the part of the user's body in accordance with an input from the user, and generating, based on the image data, and outputting evaluation data representing an evaluation result of skin condition in the evaluation region.
By using this method, it is possible to reduce the processing load in the evaluation of skin condition.
In the method, evaluation data representing an evaluation result of skin condition in only the evaluation region may be generated on the basis of the image data and output. As a result, a target for which evaluation data is to be generated can be narrowed down, and the processing load in evaluation of skin condition can be further reduced.
Moreover, in the method, the evaluation data may exclude an evaluation result of skin in a region different from the evaluation region. As a result, a target for which evaluation data is to be generated can be narrowed down, and the processing load in evaluation of skin condition can be further reduced.
A method according to a second aspect is the method according to the first aspect that further includes determining a base region located at a different position from the evaluation region in the image. The evaluation result includes a comparison result between the skin condition in the evaluation region and skin condition in the base region.
By using this method, the skin conditions in two different regions can be compared with each other.
A method according to a third aspect is the method according to the first aspect that further includes treating the skin condition in the evaluation region as current skin condition in the evaluation region, and acquiring data indicating past skin condition in the evaluation region. The evaluation result includes a comparison result between the current skin condition in the evaluation region and the past skin condition in the evaluation region.
By using this method, the current and past skin conditions in the evaluation region can be compared with each other.
A method according to a fourth aspect is the method according to any one of the first to third aspects in which the acquiring the image data includes acquiring compressed image data. The compressed image data is obtained by compressing image information regarding the part of the user's body for the four or more bands into one image.
By using this method, the amount of data in processing of image data can be reduced.
A method according to a fifth aspect is the method according to the fourth aspect that further includes generating, for the evaluation region, partial-image data corresponding to at least one band among the four or more bands from the image data. The generating and outputting includes generating, based on the partial-image data, and outputting the evaluation data.
By using this method, the processing load can be reduced.
A method according to a sixth aspect is the method according to the fifth aspect in which the compressed image data is acquired by imaging the part of the user's body through a filter array. The filter array has filters arranged two-dimensionally. Transmission spectra of at least two or more filters among the filters are different from each other. The generating the partial-image data includes generating the partial-image data using at least one reconstruction table corresponding to the at least one band. The reconstruction table represents a spatial distribution of luminous transmittance of each band for the filter array in the evaluation region.
By using this method, partial-image data can be generated.
A method according to a seventh aspect is the method according to any one of the first to sixth aspects that further includes causing a display device to display a graphical user interface for the user to specify the evaluation region.
By using this method, the user can specify an evaluation region through the graphical user interface.
A method according to an eighth aspect is the method according to the seventh aspect in which the graphical user interface displays the image representing the part of the user's body.
By using this method, the user can specify an evaluation region while viewing an image representing part of his or her body.
A method according to a ninth aspect is the method according to any one of the first to eighth aspects in which the image includes information for one or more, but no more than three, bands.
By using this method, a monochrome image or an RGB image can be used, for example, as an image representing the part of the user's body.
A method according to a tenth aspect is the method according to the ninth aspect in which the image is generated based on the image data.
By using this method, a monochrome image or an RGB image, for example, can be generated from compressed image data or hyperspectral image data.
A method according to an eleventh aspect is the method according to the ninth aspect in which the image data is treated as first image data and that further includes acquiring second image data concerning the part of the user's body and including information for one or more, but no more than three, bands. The image is an image represented by the second image data.
By using this method, the processing load can be reduced by separately acquiring, for example, a monochrome image or an RGB image representing the part of the user's body.
A method according to a twelfth aspect is the method according to any one of the first to eleventh aspects in which the skin condition is a state of a blemish.
By using this method, the state of the blemish can be evaluated.
A processing apparatus according to a thirteenth aspect includes a processor, and a memory in which a computer program that the processor executes is stored. The computer program causes the processor to perform acquiring image data concerning part of a user's body and including information for four or more bands, determining an evaluation region in the part of the user's body in an image representing the part of the user's body in accordance with an input from the user, and generating, based on the image data, and outputting evaluation data representing an evaluation result of skin condition in the evaluation region.
In this processing apparatus, the processing load in the evaluation of skin condition can be reduced.
A computer program according to a fourteenth aspect causes a computer to perform acquiring image data concerning part of a user's body and including information for four or more bands, determining an evaluation region in the part of the user's body in an image representing the part of the user's body in accordance with an input from the user, and generating, based on the image data, and outputting evaluation data representing an evaluation result of skin condition in the evaluation region.
With this computer program, the processing load in the evaluation of skin condition can be reduced.
First, with reference to
The face 10 is irradiated with light emitted from a light source for evaluation or ambient light. Light emitted from the light source for evaluation or ambient light may include, for example, visible light or may include visible light and near-infrared rays.
The hyperspectral camera 20 captures an image of the face 10 by detecting reflected light generated by the face 10 as a result of light irradiation. A dashed arrow illustrated in
The storage device 30 stores a reconstruction table for a filter array used in compressed-sensing technology and data generated in the process of evaluating skin condition. The reconstruction table is a reconstruction table corresponding to all bands for the entire region. Alternatively, the reconstruction table is a reconstruction table corresponding to some of the bands for the entire region, a reconstruction table corresponding to all bands for part of the region, or a reconstruction table corresponding to some of the bands for part of the region. In the following description, the reconstruction table corresponding to all bands for the entire region among the above-described reconstruction tables is referred to as a “full-reconstruction table”, and the other reconstruction tables are referred to as “partial-reconstruction tables”. The storage device 30 includes, for example, any storage medium such as a semiconductor memory, a magnetic storage device, or an optical storage device.
The processing circuit 50 acquires compressed image data from the hyperspectral camera 20 and acquires the reconstruction table from the storage device 30. The processing circuit 50 generates, on the basis of these acquired data, partial-image data corresponding to at least one band for a partial region of the face 10. “Partial-image data” refers to part of hyperspectral image data. The hyperspectral image data has three-dimensional image information based on a two-dimensional space and wavelengths. “Part” may be part of the space or part of a wavelength axis. The processing circuit 50 evaluates, on the basis of the partial-image data, skin condition in the partial region of the face 10 and causes the display device 40 to display an evaluation result. The partial region may be, for example, a blemish 11 on the face 10. Details of an evaluation method will be described below.
Regarding generation of partial-image data, the processing circuit 50 may reconstruct hyperspectral image data from compressed image data using the full-reconstruction table and extract the above-described partial-image data from the hyperspectral image data. Alternatively, the processing circuit 50 may generate a partial-reconstruction table from the full-reconstruction table and generate, using the partial-reconstruction table, partial-image data from compressed image data in a selective manner. The partial-reconstruction table includes a luminous transmittance for each pixel for at least one band and a luminous transmittance for each pixel for a band obtained by combining the other bands in a partial region. The luminous transmittance for each pixel for the band obtained by combining the other bands is a luminous transmittance obtained by adding the luminous transmittances of the other bands or the average of the obtained luminous transmittances. Generation of the partial-image data in a selective manner can reduce the processing load, compared with reconstruction of the hyperspectral image data. Details of this selective generation method are disclosed in Japanese Patent Application No. 2020-056353.
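One possible way to derive a partial-reconstruction table from a full-reconstruction table is sketched below. It assumes, purely for illustration, that the table's columns are grouped band-major (band b occupies columns b·Nf through (b+1)·Nf − 1); the actual table layout of the disclosed apparatus is not specified here:

```python
import numpy as np

# Hypothetical sizes: Nf = 16 spatial pixels, M = 4 bands, Ng = 16 sensor pixels.
Nf, M, Ng = 16, 4, 16
rng = np.random.default_rng(2)
H_full = rng.random((Ng, Nf * M))      # stand-in for the full-reconstruction table

band = 1                               # band of interest
# Assumed band-major layout: reshaping exposes one (Ng x Nf) slab per band.
slabs = H_full.reshape(Ng, M, Nf)

H_band = slabs[:, band, :]             # transmittances for the chosen band
# Combined "remaining" band: sum of the other bands' transmittances,
# as in the addition variant described in the text above.
H_rest = slabs[:, [b for b in range(M) if b != band], :].sum(axis=1)

H_partial = np.concatenate([H_band, H_rest], axis=1)   # Ng x (2 * Nf)
print(H_partial.shape)
```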
As described below, in a case where RGB image data is to be generated from compressed image data, the processing circuit 50 may reconstruct hyperspectral image data from compressed image data and generate RGB image data from the hyperspectral image data. Alternatively, the processing circuit 50 may generate a partial-reconstruction table corresponding to bands for RGB from the full-reconstruction table and generate, using the partial-reconstruction table, RGB image data from the compressed image data in a selective manner.
A computer program that the processing circuit 50 executes is stored in the memory 52 such as a read-only memory (ROM) or a random access memory (RAM). In this specification, an apparatus that includes the processing circuit 50 and the memory 52 is also referred to as a “processing apparatus”. The processing circuit 50 and the memory 52 may be integrated on a single circuit board or may be provided on separate circuit boards.
The display device 40 displays a graphical user interface (GUI) for the user to specify a region to be used to evaluate skin condition from within the face 10. Furthermore, the display device 40 displays evaluation results. The display device 40 may be, for example, a display of a mobile terminal or a personal computer.
The evaluation apparatus 100 may further include a gyroscope to compensate for camera shake during image capturing in addition to the above-described configuration and may further include an output device for instructing the user to perform image capturing. The output device may be, for example, a speaker or a vibrator.
A skin condition evaluation method according to the first embodiment differs between the first session and the second and subsequent sessions. In an evaluation in the first session, regions with blemishes of concern and an ideal region without blemishes in the face 10 are registered. In the following description, a region with a blemish of concern is referred to as an “evaluation region”, and an ideal region without blemishes is referred to as a “base region”. The evaluation regions and the base region are located at positions different from each other. The number of evaluation regions may be greater than or equal to one, and the number of base regions may be greater than or equal to one. After the evaluation regions and the base region are registered, skin condition in the evaluation regions is evaluated. In evaluations in the second and subsequent sessions, the current evaluation regions are determined on the basis of the evaluation regions registered in the first session, and skin condition in the evaluation regions is evaluated. Skin condition may be evaluated on a daily, weekly, monthly, or yearly cycle, for example. Since the skin turnover cycle is about 28 days, skin condition may be evaluated on the basis of this cycle.

Evaluation Method in First Session
In the following, an evaluation method in the first session will be described with reference to
When the application is started, the processing circuit 50 causes a speaker to output the following audio. The audio is, for example, “Please capture images of your face continuously from the left side to the right side. Please observe the captured images of the face, and if there is a blemish of concern, touch the blemish with a touch pen. Furthermore, please press and hold the touch pen on ideal skin without blemishes”. The processing circuit 50 acquires data indicating the date and time at which an operation was started for evaluation. In the following description, the date and time is referred to as a “start date and time”.
As illustrated in
As illustrated in
Next, with the hyperspectral camera 20 of the evaluation apparatus 100 oriented so as to face the face 10 from the front, the user touches the image capture button displayed by the display device 40 with the touch pen 42. As illustrated in
Next, with the hyperspectral camera 20 of the evaluation apparatus 100 oriented so as to face the face 10 from the front right, the user touches the image capture button displayed by the display device 40 with the touch pen 42. As illustrated in
In the above-described example, the images of the face 10 are captured from three angles: the front left, the front, and the front right. Images of the face 10 may be captured from many more different angles from left to right, or from above or below.
As illustrated in
In a case where the user touches the confirm button, the processing circuit 50 receives a confirmation signal and causes the display device 40 to display a two-dimensional composite image obtained by connecting the images of the left, front, and right sides of the face 10 as illustrated in
As illustrated in
The processing circuit 50 causes the speaker to output audio instructing the user to start image capturing. The processing circuit 50 acquires data indicating the start date and time of the first session.
The processing circuit 50 receives an image capturing signal and causes the hyperspectral camera 20 to capture an image of the face 10. The hyperspectral camera 20 generates and outputs compressed image data of the face 10.
The processing circuit 50 acquires the compressed image data and generates RGB image data of the face 10 using the reconstruction table.
The processing circuit 50 causes the display device 40 to display an RGB image based on the RGB image data.
The processing circuit 50 receives a touch or long press signal and acquires data indicating the specified position.
The processing circuit 50 determines whether the received signal indicates an evaluation region. When receiving a touch signal, the processing circuit 50 can determine that the signal indicates an evaluation region. When receiving a long press signal, the processing circuit 50 can determine that the signal indicates a base region. When Yes in Step S106, the processing circuit 50 performs the operation of Step S107. When No in Step S106, the processing circuit 50 performs the operation of Step S109.
The processing circuit 50 performs edge detection to extract the region of a blemish and determines the region to be an evaluation region.
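The disclosure does not specify which edge-detection method is used; as a hypothetical stand-in, the sketch below thresholds the gradient magnitude of a single-band image to obtain a boolean edge map around a dark blemish-like patch:

```python
import numpy as np

def blemish_edges(img, thresh):
    """Illustrative edge detection: mark pixels where the gradient
    magnitude of the image exceeds a threshold. The actual method
    (Sobel, Canny, etc.) is an implementation choice."""
    gy, gx = np.gradient(img.astype(float))  # derivatives along rows, columns
    mag = np.hypot(gx, gy)                   # gradient magnitude per pixel
    return mag > thresh                      # boolean edge map

# Synthetic single-band image: bright skin with a darker "blemish" patch.
img = np.ones((32, 32))
img[12:20, 12:20] = 0.3
edges = blemish_edges(img, 0.1)
```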
The processing circuit 50 causes the display device 40 to display the edge and label of the evaluation region.
The processing circuit 50 determines a rectangular region having a certain area including the specified position to be a base region. The rectangular region having a certain area is, for example, a region that is centered around the point that is subjected to a long press and in which m pixels are arranged vertically, and n pixels are arranged horizontally. The values of m and n are arbitrary.
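As one way to realize the rectangular base region described above, the following hypothetical helper computes the pixel bounds of an m × n rectangle centered on the long-pressed point, clipped to the image; the function name and the clipping behavior at image borders are assumptions, not from the disclosure:

```python
def base_region(cx, cy, m, n, width, height):
    """Return (top, bottom, left, right) pixel bounds of a rectangle
    centered on the long-pressed point (cx, cy), with m pixels arranged
    vertically and n pixels horizontally, clipped to the image."""
    top = max(0, cy - m // 2)
    left = max(0, cx - n // 2)
    bottom = min(height, top + m)
    right = min(width, left + n)
    return top, bottom, left, right

print(base_region(100, 80, 20, 30, 640, 480))  # (70, 90, 85, 115)
```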
The processing circuit 50 causes the display device 40 to display the perimeter and label of the base region.
The processing circuit 50 determines whether image capturing is completed. In a case where the processing circuit 50 receives a completion signal, the processing circuit 50 can determine that image capturing is completed. In a case where the processing circuit 50 does not receive a completion signal within a certain period of time, the processing circuit 50 can determine that image capturing is not completed. When Yes in Step S111, the processing circuit 50 performs the operation of Step S112. When No in Step S111, the processing circuit 50 performs the operation of Step S102 again.
The processing circuit 50 generates a composite image by connecting images of the left, front, and right sides of the face 10. On the composite image, the edges and labels of the evaluation regions and the perimeter and label of the base region are superposed to indicate the registered regions.
The processing circuit 50 causes the display device 40 to display the composite image generated in Step S112.
The processing circuit 50 receives a registration signal and causes the storage device 30 to store data to be used in an evaluation in the first session. As a result, the data is registered.
The contour diagram of the blemish is a two-dimensional distribution of pixel values for a certain band. Blemishes have melanin pigmentation in the lower part of the epidermis. Since melanin pigmentation absorbs light, pixel values indicating the amounts of reflected light are lower in the evaluation regions than in the base region. The certain band may be a band that is often used to evaluate blemishes, such as a band of a wavelength of 550 nm or 650 nm. The band may have, for example, a wavelength width that is greater than or equal to 1 nm and less than or equal to 10 nm. The contour diagram of the blemish illustrated in
The area of the blemish is the area of a region enclosed by the edge of the blemish. The density of the blemish is the value obtained by dividing the average pixel value of the evaluation region by the average pixel value of the base region for the certain band. The coloration of the blemish is the ratio of pixel values for any two bands. The coloration may be, for example, the value obtained by dividing the average pixel value of the evaluation region for the 550 nm wavelength band by the average pixel value of the evaluation region for the 650 nm wavelength band. The larger this value, the lighter the yellowish color. Bands at other wavelengths may be selected to evaluate blueness and redness.
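The density and coloration definitions above are simple ratios of average pixel values, and can be sketched directly (function names are illustrative; `eval_px` and `base_px` stand for the pixel values of the evaluation and base regions for one band):

```python
import numpy as np

def blemish_density(eval_px, base_px):
    """Density: average pixel value of the evaluation region divided by
    the average pixel value of the base region, for a certain band.
    Lower values mean the blemish is darker relative to normal skin."""
    return eval_px.mean() / base_px.mean()

def blemish_coloration(eval_550, eval_650):
    """Coloration: average pixel value of the evaluation region for the
    550 nm band divided by that for the 650 nm band. Per the text, the
    larger this value, the lighter the yellowish color."""
    return eval_550.mean() / eval_650.mean()
```

Other band pairs may be substituted into the coloration ratio to evaluate blueness and redness, as the text notes.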
The processing circuit 50 acquires the compressed image data of the left, front, and right sides of the face 10, the data representing the composite image, and the data indicating the registered regions from the storage device 30.
The processing circuit 50 causes the display device 40 to display the composite image on which, regarding the registered regions, the edges and labels of the evaluation regions and the perimeter and label of the base region are superposed.
The processing circuit 50 receives a touch signal and acquires data indicating the evaluation region selected from among the registered regions.
The processing circuit 50 generates partial-image data corresponding to some bands, such as wavelengths of 550 nm and 650 nm, for the evaluation region and base region, for example.
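Generating partial-image data for a few bands amounts to slicing the reconstructed hyperspectral cube at the band indices nearest the wavelengths of interest. A minimal sketch, assuming the cube is laid out as (height, width, bands) with a known wavelength axis (the layout is an assumption, not stated in the disclosure):

```python
import numpy as np

def partial_image(cube, wavelengths, targets):
    """Select from a hyperspectral cube shaped (height, width, bands)
    the bands whose center wavelengths are nearest to ``targets``
    (e.g. 550 nm and 650 nm), returning a thin partial image."""
    idx = [int(np.argmin(np.abs(wavelengths - t))) for t in targets]
    return cube[:, :, idx]
```

Because only the selected bands and the selected regions are processed downstream, the partial image is much smaller than the full cube, which is consistent with the reduced processing load the disclosure aims for.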
The processing circuit 50 generates and outputs evaluation data on the basis of the partial-image data. The evaluation data represents evaluation results of skin condition in the evaluation region. The evaluation results may be, for example, the contour of the blemish as well as the area, density, and coloration of the blemish. The evaluation results include a comparison result between skin condition in the evaluation region and skin condition in the base region.
The processing circuit 50 causes the display device 40 to display the evaluation results.
In a case where the number of evaluation regions is greater than or equal to two, the processing circuit 50 repeatedly performs the operations of Steps S203 to S206 in accordance with an input to select an evaluation region from the user.
In the evaluation method performed in the first session according to the first embodiment, the evaluation region is determined in accordance with an input from the user. Thus, the processing load can be reduced compared to a method in which the region of a blemish is automatically extracted from the face 10 through image processing. The user determines the evaluation region himself/herself, and thus the user can easily grasp evaluation results of a blemish in a region of the face 10 which the user himself/herself wants to pay attention to. Since skin condition in the evaluation region instead of the entire region of the face 10 is evaluated, faster processing is possible.
In the following, an evaluation method in the second and subsequent sessions will be described with reference to
As illustrated in
Next, with the hyperspectral camera 20 of the evaluation apparatus 100 oriented so as to face the face 10 from the front left, the user touches the image capture button displayed by the display device 40 with the touch pen 42. As illustrated in
In a case where not only the evaluation region A but also the evaluation region B illustrated in
In a case where the user selects the confirm button, the processing circuit 50 receives a confirmation signal and causes the display device 40 to display, as illustrated in
The processing circuit 50 causes the display device 40 to display the composite image on which, regarding the registered regions in the first session, the edges and labels of the evaluation regions and the perimeter and label of the base region are superposed. The processing circuit 50 acquires data indicating the start date and time of the current session that is the second or subsequent session.
The processing circuit 50 receives a touch signal and acquires data indicating the evaluation region selected from among the registered regions.
Steps S303 to S305 are the same as Steps S102 to S104 illustrated in
The processing circuit 50 receives a touch signal and acquires data indicating the specified position.
The processing circuit 50 determines, on the basis of the specified position, the region of the blemish including the specified position to be the same as the selected evaluation region.
The processing circuit 50 performs edge detection to extract the region of the blemish, determines the region to be the current evaluation region, and adds the same label as the selected evaluation region to the current evaluation region.
The processing circuit 50 causes the display device 40 to display the edge and label of the current evaluation region.
The processing circuit 50 determines whether all the evaluation regions desired to be evaluated have been selected from the registered regions. In a case where a confirmation signal is received, the processing circuit 50 can determine that all the evaluation regions desired to be evaluated have been selected. In a case where a confirmation signal is not received within a predetermined time period, the processing circuit 50 can determine that all the evaluation regions desired to be evaluated have not been selected. When Yes in Step S310, the processing circuit 50 performs the operation of Step S311. When No in Step S310, the processing circuit 50 performs the operation of Step S302 again.
The processing circuit 50 generates composite image data obtained by superposing the edge and label of the current evaluation region.
The processing circuit 50 causes the display device 40 to display a composite image based on the composite image data generated in Step S311.
The processing circuit 50 receives a registration signal and causes the storage device 30 to store data to be used in the current evaluation. As a result, the data is registered.
As illustrated in
The spectra at all the pixels may be compared between the blemish obtained at the present time and the blemish obtained X days ago using the Spectral Angle Mapper (SAM). In SAM, each pixel is represented by an N-dimensional vector whose components are the pixel values for the N bands included in the target wavelength range. A change in the spectrum of a certain pixel can be checked using the angle formed by the current vector and the vector obtained X days ago at that pixel. In a case where the angle is 0°, the two spectra at the pixel are equal to each other. In a case where the absolute value of the angle is greater than 0°, the two spectra at the pixel differ from each other. By checking, at each pixel, the angle formed by the current vector and the vector obtained X days ago, changes in spectrum can be obtained as a two-dimensional distribution. In machine learning, learning may be performed using the correspondence between vector orientation and coloration as training data. Through this machine learning, the colorations of the most central, central, and peripheral portions obtained at the present time and X days ago may be determined from the average vector of each of those portions, and the colorations at the present time may be compared with those obtained X days ago.
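The SAM comparison described above reduces to the angle between two N-band vectors, which is insensitive to overall brightness and therefore captures changes in spectral shape. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def spectral_angle(v_now, v_past):
    """Spectral Angle Mapper: angle in degrees between the current
    spectrum and the spectrum obtained X days ago at one pixel.
    0 degrees means the two spectra have identical shape."""
    cos = np.dot(v_now, v_past) / (np.linalg.norm(v_now) * np.linalg.norm(v_past))
    # Clip to guard against floating-point values slightly outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Applying this function at every pixel of the evaluation region yields the two-dimensional distribution of spectral change mentioned in the text. Note that scaling a spectrum by a constant leaves the angle at 0°, so uniform lighting changes do not register as spectral change.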
The processing circuit 50 acquires, from the storage device 30, the compressed image data of the current face 10, data indicating the registered regions, data indicating the current evaluation region, and data representing the composite image.
The processing circuit 50 causes the display device 40 to display the composite image and a comparison target as a candidate. The edge and label of the current evaluation region are superposed on the composite image.
The processing circuit 50 receives a touch signal and acquires data indicating the current evaluation region and the comparison target.
The processing circuit 50 acquires, on the basis of the comparison target, data indicating past skin condition in the evaluation region corresponding to the current evaluation region from the storage device 30.
The processing circuit 50 generates partial-image data concerning the current evaluation region.
The processing circuit 50 generates and outputs evaluation data on the basis of the partial-image data. The evaluation data represents evaluation results of skin condition in the current evaluation region. The evaluation results include comparison results between the current skin condition and the past skin condition.
The processing circuit 50 causes the display device 40 to display the evaluation results.
With the evaluation method for the second and subsequent sessions according to the first embodiment, it is possible to know how skin condition in an evaluation region changes over time. The evaluation region is determined in accordance with an input from the user, and thus the user can easily grasp evaluation results of a blemish in the region of the face 10 that the user wants to pay attention to. Since skin condition is evaluated only in the evaluation region for which the user wants to know the evaluation, rather than in all the registered evaluation regions, faster processing is possible.
Next, an example of data stored in the storage device 30 will be described with reference to
Next, a modification of the evaluation apparatus 100 according to the first embodiment will be described with reference to
In the modification of the first embodiment, the processing circuit 50 performs the following operation instead of Steps S102 and S103 illustrated in
In the modification of the first embodiment, the RGB image data is not generated from the compressed image data but is instead generated directly by the camera 22. Thus, the processing load can be reduced.
In this specification, image data concerning part of the user's body and including information for four or more bands is also referred to as "first image data", and image data concerning part of the user's body and including information for one or more, but no more than three, bands is also referred to as "second image data".
In the first embodiment, the processing circuit 50 included in the evaluation apparatus 100 performs edge detection, generates RGB image data and partial-image data from compressed image data, and generates evaluation data on the basis of the partial-image data. In a case where an external server is connected to the evaluation apparatus 100 via a communication network, a processing circuit included in the external server may perform edge detection, may generate the RGB image data and the partial-image data from the compressed image data, or may generate the evaluation data on the basis of the partial-image data. These operations are assigned to the external server, for example, in a case where the processing load on the processing circuit 50 included in the evaluation apparatus 100 is desired to be reduced. In this specification, the evaluation apparatus and the server are collectively referred to as an "evaluation system". In the following, with reference to
Next, an example of an operation performed between the evaluation apparatus 100 and the server 120 in an evaluation in the first session will be described with reference to
The first processing circuit 50 receives an image capturing signal and causes the hyperspectral camera 20 to generate compressed image data of the face 10 of the user. The transmission circuit 12s included in the evaluation apparatus 100 transmits the compressed image data to the reception circuit 14r included in the server 120.
The second processing circuit 70 acquires the compressed image data.
The second processing circuit 70 generates, using the reconstruction table, RGB image data from the compressed image data. The transmission circuit 14s included in the server 120 transmits the RGB image data to the reception circuit 12r included in the evaluation apparatus 100.
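The disclosure does not detail the reconstruction table used to recover band images from the compressed measurement. Purely as a rough illustration, if the per-pixel compressed measurement is modeled as a linear projection y = Φx of the N-band spectrum x (Φ being the sensing matrix encoded by the reconstruction table), a ridge-regularized least-squares solve recovers an estimate of x; real compressed-sensing reconstruction would instead exploit sparsity priors, so this is a stand-in, not the patented method:

```python
import numpy as np

def reconstruct_bands(y, phi, lam=1e-3):
    """Estimate an N-band spectrum x from a compressed measurement
    y = phi @ x by ridge-regularized least squares:
        x_hat = (phi^T phi + lam * I)^(-1) phi^T y
    ``lam`` stabilizes the solve when phi^T phi is ill-conditioned."""
    n = phi.shape[1]
    return np.linalg.solve(phi.T @ phi + lam * np.eye(n), phi.T @ y)
```

Once the band values are reconstructed for each pixel, the bands corresponding to R, G, and B can be combined into the RGB image that Step S602 transmits back to the evaluation apparatus.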
The first processing circuit 50 causes the display device 40 to display an RGB image based on the RGB image data.
The first processing circuit 50 receives a touch or long press signal and acquires data indicating the specified position. The transmission circuit 12s included in the evaluation apparatus 100 transmits the data indicating the specified position to the reception circuit 14r included in the server 120.
When a touch signal is received, the second processing circuit 70 extracts the region of a blemish on the basis of the specified position, determines the region to be an evaluation region, and assigns a label to the region. The transmission circuit 14s included in the server 120 transmits data indicating the edge and label of the evaluation region to the reception circuit 12r included in the evaluation apparatus 100.
When a long press signal is received, the second processing circuit 70 determines a rectangular region having a certain area including the position that is specified with a long press to be a base region, and assigns a label to the base region. The transmission circuit 14s included in the server 120 transmits data indicating the perimeter and label of the base region to the reception circuit 12r included in the evaluation apparatus 100.
The operations of Steps S501 to S603 described above are performed every time an image of the face 10 is captured from a different angle.
The first processing circuit 50 causes the display device 40 to display a composite image for registration, the composite image being obtained by connecting the images of the face 10 captured from different angles. On the composite image, the edges and labels of the evaluation regions and the perimeter and label of the base region are superposed to indicate the evaluation regions and the base region.
The first processing circuit 50 receives a registration signal and causes the storage device 30 to store data to be used in an evaluation in the first session. As a result, the data is registered.
The first processing circuit 50 causes the display device 40 to display a composite image for evaluation on which, regarding the registered regions, the edges and labels of evaluation regions and the perimeter and label of the base region are superposed.
The first processing circuit 50 receives a touch signal and acquires data indicating the evaluation region selected from among the registered regions. The transmission circuit 12s included in the evaluation apparatus 100 transmits the data indicating the evaluation region to the reception circuit 14r included in the server 120.
The second processing circuit 70 generates partial-image data concerning the selected evaluation region.
The second processing circuit 70 generates and outputs evaluation data on the basis of the partial-image data. The evaluation data represents evaluation results of skin condition in the selected evaluation region. The second processing circuit 70 causes the storage device 60 to store the evaluation data. The transmission circuit 14s included in the server 120 transmits the evaluation data to the reception circuit 12r included in the evaluation apparatus 100.
The first processing circuit 50 causes the display device 40 to display the evaluation results. The first processing circuit 50 may store the evaluation data into the storage device 30.
Next, with reference to
The first processing circuit 50 causes the display device 40 to display a composite image for selection. On the composite image, the edges and labels of the evaluation regions and the perimeter and label of the base region are superposed to indicate the registered regions in the first session.
The first processing circuit 50 receives a touch signal and acquires data indicating the evaluation region selected from among the registered regions.
Steps S703 and S704 are the same as Steps S501 and S502 illustrated in
The first processing circuit 50 receives a touch signal and acquires data indicating the specified position. The transmission circuit 12s included in the evaluation apparatus 100 transmits the data indicating the specified position to the reception circuit 14r included in the server 120.
The second processing circuit 70 determines, on the basis of the position that is specified with a touch, the region of a blemish including the specified position to be the same as the selected evaluation region.
The second processing circuit 70 extracts the region of the blemish on the basis of the specified position, determines the region to be the current evaluation region, and assigns, to the region, the same label as the selected evaluation region. The transmission circuit 14s included in the server 120 transmits data indicating the edge and label of the current evaluation region to the reception circuit 12r included in the evaluation apparatus 100.
In a case where the user selects evaluation regions from the registered regions, the operations of Steps S701 to S804 described above are performed every time an evaluation region is selected.
The first processing circuit 50 causes the display device 40 to display a composite image for registration on which the edge and label of the current evaluation region are superposed.
The first processing circuit 50 receives a registration signal and causes the storage device 30 to store data to be used in the current evaluation. As a result, the data is registered.
The first processing circuit 50 causes the display device 40 to display a composite image for evaluation on which the edge and label of the current evaluation region are superposed as well as a comparison target as a candidate.
The first processing circuit 50 receives a touch signal and acquires data indicating the current evaluation region and the comparison target. The transmission circuit 12s included in the evaluation apparatus 100 transmits the data indicating the current evaluation region and the comparison target to the reception circuit 14r included in the server 120.
The second processing circuit 70 acquires, on the basis of the data indicating the comparison target, past data indicating past skin condition in an evaluation region corresponding to the current evaluation region from the storage device 60.
The second processing circuit 70 generates partial-image data concerning the current evaluation region.
The second processing circuit 70 generates and outputs evaluation data on the basis of the partial-image data. The evaluation data represents evaluation results of skin condition in the current evaluation region. The second processing circuit 70 stores the evaluation data into the storage device 60. The transmission circuit 14s included in the server 120 transmits the evaluation data to the reception circuit 12r included in the evaluation apparatus 100.
The first processing circuit 50 causes the display device 40 to display the evaluation results. The first processing circuit 50 may store the evaluation data into the storage device 30.
In the evaluation system 200 according to the second embodiment, instead of the first processing circuit 50, the second processing circuit 70 performs edge detection, generates RGB image data and partial-image data from compressed image data, and generates evaluation data on the basis of the partial-image data. Thus, the processing load on the first processing circuit 50 can be reduced. In accordance with the processing performance of the first processing circuit 50, the first processing circuit 50 may perform part of the series of operations performed by the second processing circuit 70.
Next, with reference to
In the evaluation system 210 according to a modification of the second embodiment, the first processing circuit 50 performs the following operations instead of Steps S501, S601, and S602 illustrated in
In the evaluation system 210 according to the modification of the second embodiment, the RGB image data is not generated from the compressed image data but is instead generated directly by the camera 22. Thus, the processing load on the second processing circuit 70 can be reduced.
The technology according to the present disclosure is applicable, for example, to applications for evaluating skin condition.
Number | Date | Country | Kind |
---|---|---|---|
2021-051572 | Mar 2021 | JP | national |
Number | Date | Country | |
---|---|---|---|
Parent | PCT/JP2022/009583 | Mar 2022 | US |
Child | 18464221 | US |