Energy-based treatments are utilized for therapeutic and aesthetic purposes on target skin. Typically, medical personnel diagnose various skin conditions and set the parameters of a machine that delivers an energy-based treatment. An energy-based treatment may be one that targets tissue of the target skin, is absorbed by one or more chromophores, and causes a cascade of reactions, including photochemical, photothermal, thermal, photoacoustic, acoustic, healing, ablation, coagulation, biological, tightening, or any other physiological effect. Those reactions create the desired treatment outcomes such as permanent hair removal, hair growth, pigmented or vascular lesion treatment, soft tissue rejuvenation or tightening, acne treatment, cellulite treatment, vein collapse, or tattoo removal, which may include mechanical breakdown of tattoo pigments and crusting.
Therapeutic and aesthetic treatments focus on altering aesthetic appearance through the treatment of conditions including scars, skin laxity, wrinkles, moles, liver spots, excess fat, cellulite, unwanted hair, skin discoloration, spider veins, and so on. Target skin is subjected to the treatment using an energy-based system, such as a laser and/or light energy-based system. In these treatments, light energy with pre-defined parameters is typically projected onto the target skin area. Medical personnel may have to consider skin attributes such as skin type, presence of tanning, hair color, hair density, hair thickness, blood vessel diameter and depth, lesion type, pigment depth, pigment intensity, and tattoo color and type in order to decide the treatment parameters to be used.
In one aspect of the current disclosure, a system for determining skin attributes and treatment parameters of target skin for an aesthetic skin diagnosis and treatment unit comprises: a display; at least one source for illumination light; an image capture device; a source for providing energy-based treatment; a processor; and a memory communicatively coupled to the processor, wherein the memory stores processor-executable instructions which, on execution, cause the processor to: activate the at least one source for illumination light to illuminate in a plurality of monochromatic wavelengths; obtain images from the image capture device in the plurality of monochromatic wavelengths; receive target skin data comprising data of each pixel of the obtained images; analyze the target skin data using a plurality of trained skin attribute models; determine, with the trained skin attribute models, at least one skin attribute classification of the target skin; analyze, with a trained skin treatment model, the at least one classification of the skin attributes of the target skin; identify, with the trained skin treatment model, treatment parameters of the source of energy-based treatment for the at least one skin attribute classification determined; and display the treatment parameters identified to treat the skin attributes.
In one aspect of the current disclosure, the system generates and displays a list of attributes of the target skin based on the analysis by the trained skin attribute models. The source of energy-based treatment is activated to treat the target skin with the treatment parameters determined. Also, the plurality of trained skin attribute models are trained by: (i) providing a plurality of labelled images of at least one skin attribute, stored in a database, to the skin attribute models, and (ii) configuring the skin attribute models to classify the plurality of labelled images into at least one skin attribute.
In another aspect of the current disclosure, the plurality of different wavelengths comprises 450 nm, 490 nm, 570 nm, 590 nm, 660 nm, 770 nm, and 850 nm. The processor is further configured, after obtaining the images, to register and align the images of the plurality of monochromatic wavelengths, and to generate and display a map of the target skin with any combination of the plurality of monochromatic wavelengths, or to generate and display a map of the target skin from the wavelengths that represent red, green, and blue.
In yet another aspect of the current disclosure, one of the skin attributes is hair on the target skin and a hair mask model is one of the plurality of skin attribute models, and the processor is further configured to: receive the target skin data of one monochromatic wavelength of the plurality of monochromatic wavelengths; and determine, with the hair mask model, one of two classifications, hair or background, for each pixel of an image of the one monochromatic wavelength.
In a further aspect of the current disclosure, the processor is further configured to instruct additional skin attribute models to remove pixels labeled hair by the hair mask model from further analysis of the target skin. One of the skin attributes is skin type and a skin type model is one of the plurality of skin attribute models. The processor is further configured to: receive skin type data comprising an average calibrated reflectance value of the total pixels of each monochrome image; and determine, with the skin type model, one of six classifications of skin type. The skin attribute is at least one of: melanin density, vascular density, or scattering.
In some aspects of the current disclosure, the processor is further configured to: receive skin type data comprising a plurality of absolute reflectance values for each pixel representing the plurality of wavelengths; analyze the plurality of absolute values per pixel, with at least one of a melanin model or a vascular model, compared with look-up table (LUT) values, wherein the LUT comprises values for skin models that represent known physical models of illumination effects on human skin and represent physical measurements of concentration of the skin attributes in the target skin; and identify, for each pixel, the one LUT entry for at least one of melanin density or vascular density with the value closest in distance to the plurality of measured absolute values for that pixel, wherein the distance may be computed with any suitable distance or similarity metric.
In yet another aspect of the current disclosure, one of the skin attributes is vascular lesion depth and a vascular depth model is one of the plurality of skin attribute models. The processor is further configured to: receive the target skin data of the plurality of monochromatic wavelengths; determine a classification for each pixel, with the vascular depth model, of one of four classifications, deep vascular, medium vascular, shallow vascular, or background; and generate and display a map with markings to illustrate the classifications of vascular lesion depths.
In a further aspect of the current disclosure, one of the skin attributes is pigment lesion depth and a pigment depth model is one of the plurality of skin attribute models. The processor is further configured to: receive the target skin data of two monochromatic wavelengths of the plurality of monochromatic wavelengths, wherein one monochromatic wavelength represents the lowest wavelength value of the system and the second monochromatic wavelength represents the highest wavelength value of the system; receive, from the vascular depth model, classified pixels of vascular depth; analyze the pixels not classified by the vascular depth model for outliers in darkness for each of the two monochromatic wavelengths; determine a classification for each pixel analyzed, with the pigment depth model, of the outliers of the lowest wavelength value as shallow pigment lesions and of the outliers of the highest wavelength value as deep pigment lesions; and generate and display a map with markings to illustrate the classifications of pigment lesion depths.
In another aspect of the current disclosure, one of the skin attributes is pigment lesion intensity and a pigment intensity model is one of the plurality of skin attribute models. The processor is further configured to: receive the target skin data of three features derived, by a melanin density model, from the plurality of monochromatic images, wherein the features are the 99th percentile of melanin concentration, representing the lesion; the calculated median melanin level of the whole image; and the 99th percentile subtracted from the calculated median melanin level; and determine, based on the features, whether the pigment lesion intensity is a light or a dark lesion.
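As an illustrative sketch only (not the claimed implementation), the three pigment-lesion-intensity features above might be computed from a melanin-density map as follows; the function and variable names, and the sign convention of the difference feature, are assumptions:

```python
import numpy as np

def pigment_intensity_features(melanin_map):
    """Sketch of the three pigment-lesion-intensity features: the 99th
    percentile of melanin concentration (taken to represent the lesion),
    the median melanin level of the whole image, and the 99th percentile
    subtracted from the median, following the text's phrasing.
    """
    p99 = np.percentile(melanin_map, 99)   # lesion-level melanin
    median = np.median(melanin_map)        # whole-image melanin level
    diff = median - p99                    # 99th percentile subtracted
                                           # from the median
    return p99, median, diff

melanin = np.linspace(0.0, 1.0, 101)       # toy melanin-density "map"
p99, med, diff = pigment_intensity_features(melanin)
```

A classifier would then use these three numbers to decide between a light and a dark lesion.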
In one aspect of the current disclosure, the processor is further configured to: receive the value in the LUT of at least one of the melanin density value from the melanin model or the vascular density value from the vascular model; compute a new value for the melanin density value or the vascular density value based on setting the other skin attributes in the LUT closest to zero; and generate a map of either the melanin density or the vascular density using the new values computed.
In an additional aspect of the current disclosure, the processor, with the trained skin treatment model, is further configured to receive information of: treatment safety parameters; energy treatment source capability parameters; at least one skin area to treat from a user; at least one skin problem indication for treatment based on the skin area to treat from a user; and output of the plurality of the skin attribute models related to the at least one skin problem indication. The processor and the trained skin treatment model then determine, based on the information received, target skin treatment parameters of the energy-based treatment; and display the target skin treatment parameters of the energy-based treatment.
The determination of the skin treatment parameters is done with a treatment look-up table, and the processor is further configured to: determine which one of a plurality of skin treatment look-up tables to use, wherein each of the skin treatment look-up tables is based on a particular skin problem indication; match the output of the plurality of the skin attribute models to a treatment parameter of the determined skin treatment look-up table; and display the matched skin treatment parameters of the energy-based treatment.
Also, the processor, with the trained skin treatment model, is further configured to: generate and display a red, green, and blue (RGB) image of the target skin; generate and save to memory at least one of a plurality of maps; and display the at least one generated map, wherein the at least one of the plurality of maps comprises: a melanin density map, a vascular density map, a pigment lesion depth map, a vascular lesion depth map, a pigment intensity map, or any combination thereof. The at least one skin problem indication is at least one of: pigment lesions, vascular lesions, combined pigment and vascular lesions, hair removal, or any combination thereof.
In an additional aspect of the current disclosure, there is a method for determining skin attributes and treatment parameters of target skin comprising: providing a display, at least one source for illumination light, an image capture device, a source for providing energy-based treatment, a memory, and a processor; activating, by the processor, the at least one source for illumination light to illuminate in a plurality of monochromatic wavelengths; obtaining, by the processor, images from the image capture device in the plurality of monochromatic wavelengths; receiving, by the processor, target skin data comprising data of each pixel of the obtained images; analyzing, by the processor, the target skin data using a plurality of trained skin attribute models; determining, by the processor with the trained skin attribute models, at least one skin attribute classification of the target skin; analyzing, by the processor with a trained skin treatment model, the at least one classification of the skin attributes of the target skin; identifying, by the processor with the trained skin treatment model, treatment parameters of the source of energy-based treatment for the at least one skin attribute classification determined; and displaying, by the processor, the treatment parameters identified to treat the skin attributes.
The method may further include that the skin attribute is at least one of: melanin density, vascular density, or scattering, and wherein the method further comprises: receiving, by the processor, skin type data comprising a plurality of absolute reflectance values for each pixel representing the plurality of wavelengths; analyzing, by the processor, the plurality of absolute values per pixel, with at least one of a melanin model or a vascular model, compared with look-up table (LUT) values, wherein the LUT comprises values for skin models that represent known physical models of illumination effects on human skin and represent physical measurements of concentration of the skin attributes in the target skin; and identifying, by the processor, for each pixel, the one LUT entry for at least one of melanin density or vascular density with the value closest in distance to the plurality of measured absolute values for that pixel, wherein the distance may be computed with any suitable distance or similarity metric.
A map generation method wherein the method further comprises: receiving, by the processor, the value in the LUT of at least one of the melanin density value from the melanin model or the vascular density value from the vascular model; computing, by the processor, a new value for the melanin density value or the vascular density value based on setting the other skin attributes in the LUT closest to zero; and generating, by the processor, a map of either the melanin density or the vascular density using the new values computed.
In yet another aspect of the current disclosure, the method further comprises: receiving, by the processor with the trained skin treatment model, information of: treatment safety parameters; energy treatment source capability parameters; at least one skin area to treat from a user; at least one skin problem indication for treatment based on the skin area to treat from a user; and output of the plurality of the skin attribute models related to the at least one skin problem indication; then determining, by the processor with the trained skin treatment model, based on the information received, target skin treatment parameters of the energy-based treatment; and displaying, by the processor with the trained skin treatment model, the target skin treatment parameters of the energy-based treatment.
In a final aspect of the current disclosure, the determining of the skin treatment parameters is done with a treatment look-up table and the method further comprises: determining, by the processor with the trained skin treatment model, which one of a plurality of skin treatment look-up tables to use, wherein each of the skin treatment look-up tables is based on a particular skin problem indication; matching, by the processor with the trained skin treatment model, the output of the plurality of the skin attribute models to a treatment parameter of the determined skin treatment look-up table; and displaying, by the processor, the matched skin treatment parameters of the energy-based treatment.
Various embodiments of the present disclosure can be further explained with reference to the attached drawings, wherein like structures are referred to by like numerals throughout the several views. The drawings shown are not necessarily to scale, with emphasis instead generally being placed upon illustrating the principles of the present disclosure. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art one or more illustrative embodiments.
Various detailed embodiments of the present disclosure, taken in conjunction with the accompanying figures, are disclosed herein; however, it is to be understood that the disclosed embodiments are merely illustrative. In addition, each of the examples given in connection with the various embodiments of the present disclosure is intended to be illustrative, and not restrictive.
Skin tissue is a very complex biological organ. Although the basic structure is common to all humans, there are many variations within the different areas of a specific individual and among individuals. Variations include skin color (melanin content in the basal layer), hair color and thickness, collagen integrity, blood vessel structure, vascular and pigmented lesions of various types, foreign objects like tattoos, etc.
Various embodiments of the present disclosure provide a technical solution by using a target skin diagnostic system that may be included in a skin treatment system to assist medical personnel to select optimal treatment presets and determine target skin attributes associated with skin conditions, skin diseases, or skin reactions to treatment. In some embodiments, data of an area of skin, the target skin, will be collected before and after treatment, and this data may be compared for immediate analysis of how to continue to treat the target skin. In some embodiments, target skin responses to treatment are further used to determine the efficacy of treatment and to train a treatment module; as a specific example, humidity present in the skin after treatment is determined.
The present disclosure relates to a method and system for determining a plurality of attributes, features, and characteristics (hereinafter skin attributes) of target skin of a person by a skin diagnostic system that may be part of an aesthetic skin treatment system. The present disclosure proposes to automate the process of determining the plurality of skin attributes by type using one or more trained models.
The one or more trained models are trained with a large set of parameters related to the classification of the plurality of skin attributes of the target skin, to output specific skin attributes of the target skin of a person. The skin attributes may include, but are not limited to: skin type using the Fitzpatrick scale; pigment or melanin (hereinafter melanin); vascular or erythema (hereinafter vascular); pigment lesion intensity; pigment lesion depth; vascular lesion depth; masking hair data; and a scattering coefficient of the skin. The scattering coefficient is a measure of the ability of particles to scatter photons out of a beam of light.
In some embodiments, skin attributes may be determined for tattoo removal. In tattoo removal, the challenges are twofold. First, to destroy tattoo ink selectively, the best energy-based method, such as a laser wavelength, should be chosen to achieve selective absorption for the particular ink color or colors while minimizing non-specific effects. However, commonly used tattoo inks are only lightly regulated, and ink composition is highly variable. As a result, what appear to be similar ink colors may have a wide peak absorption range, and medical personnel have no way to determine the exact type and properties of the specific ink and thus the optimal treatment to be used. Second, in addition to the ink's color properties, the skin type (amount of melanin), the depth of the ink, and the amount of ink should also be considered for optimal energy-based settings and clinical outcomes.
Further, Principal Component Analysis (PCA) may be used, which enables robust classification of valuable parameters while reducing the overall dimensionality of the acquired data. In other words, PCA differentiates data features with respect to their importance to the final clinical outcome. The most relevant parameters may be employed for the development of a physical energy-based treatment interaction model, including, for example, thermal relaxation and soft tissue coagulation. Moreover, large amounts of highly correlated data allow for the construction of empirical equations which are based on quantitative immediate biological responses, like erythema in hair removal and frosting formation in tattoo removal treatments. Currently, immediate responses are subjectively assessed in a non-quantitative manner by medical personnel without any dynamical quantification. Details on the use of PCA and on methods/systems for tattoo removal are further described in U.S. application Ser. No. 17/226,235, filed 9 Apr. 2021, which is hereby incorporated by reference in its entirety.
Values and/or maps are generated by the skin diagnostic system for skin attributes such as, but not limited to: melanin density, vascular density, a map of pigment depth, a map of vascular depth, and a map of optical properties, and these properties may or may not reveal physical conditions of the target skin.
In some embodiments, as seen in
In some embodiments, the one or more modules 107 are configured such that the modules gather and/or process data, and the results are then stored in the memory of the skin diagnostic system as part of data 108, such as training data, operating treatment parameters data, or analyzed target skin data (not shown). In some embodiments, the data 108 may be processed by the one or more modules 107. In some embodiments, the one or more modules 107 may be implemented as dedicated units, and when implemented in such a manner, the modules may be configured with the functionality defined in the present disclosure to result in a novel hardware device. As used herein, the term module may refer to an Application Specific Integrated Circuit (ASIC), an electronic circuit, a Field-Programmable Gate Array (FPGA), a Programmable System-on-Chip (PSoC), a combinational logic circuit, and/or other suitable components that provide the described functionality. In some embodiments, the training data is used to train the modules in successful identification of target skin attributes. In some embodiments, unsuccessful identification of target skin attributes is also included for training a model.
In some embodiments, and as seen in
In some embodiments, the combination system further comprises a module 210 configured to control obtaining the image data with an image capture device such as a multispectral camera which may be part of a handpiece 1300. In some embodiments, the combination system further comprises a treatment component with a handpiece to deliver the energy-based treatment 1350.
The target skin data, in some embodiments, includes skin attributes or at least one attribute of the target skin tissue to be analyzed. In some embodiments, the target skin data comprises at least one pre-treatment target skin attribute (pre-treatment target skin data) and at least one real-time target skin attribute (real-time target skin data). The pre-treatment target skin data may be skin attributes associated with the target skin before an aesthetic treatment is performed on the target skin. The real-time target skin data may be skin attributes which are obtained in response to real-time aesthetic treatment. In some embodiments, the target skin data is obtained before the aesthetic treatment, during it at regular intervals of time, immediately after it, or any combination thereof. The target skin data at any time around the aesthetic treatment may be analyzed to develop different treatment parameters. The treatment may be done in a short time period, such as a laser firing, and thus the gathering of image data and decision-making will desirably also be fast, i.e., capable of delivering feedback signals in less than a few milliseconds.
In some embodiments, upon receiving the target skin data, the target skin analyze module 202 may be configured to analyze the target skin data using a plurality of trained models to determine a plurality of skin attributes of the target skin. In some embodiments, the plurality of trained models may be a plurality of machine learning models, deep learning models, or any combination thereof. Each of the plurality of trained models may be trained separately and independently. In some embodiments, each of the plurality of trained models may be pre-trained using the training data. In some embodiments, the target skin data is associated with skin attributes including, but not limited to: melanin, an anatomical location, spatial and depth distribution (epidermal/dermal) of melanin, spatial and depth distribution (epidermal/dermal) of blood, melanin morphology, blood vessel morphology, vein (capillary) network morphology, diameter, and depth, spatial and depth distribution (epidermal/dermal) of collagen, water content, melanin/blood spatial homogeneity, hair, temperature, or topography.
The apparatus may comprise a frame 1023, configured to circumscribe a target skin 1030, to stretch or flatten the target tissue 1030 for capturing of diagnostic images. In some embodiments, target skin data includes diagnostic images captured of target skin 1030. The frame 1023 may comprise one or more fiducial markers 1004. The fiducial markers 1004 may be included in the images and used for digital registration of multiple images captured of the same target tissue 1030.
The apparatus may comprise an electro-optics unit 1001, comprising an illuminator assembly 1040, an optics assembly 1061, and an image sensor assembly 1053.
The illuminator assembly 1040 may be configured to illuminate the target tissue 1030 during capturing of images. The illuminator assembly 1040 may comprise a plurality of sets of one or more illumination elements also called illumination light sources (such as LEDs), each set having a different optical output spectrum (e.g., peak wavelength). A combination of one or more of the optical spectra may be employed for illumination when capturing images of the target skin tissue 1030. Images at each optical spectrum may be captured individually, and the images subsequently combined. Alternatively, or additionally, illumination elements, of the illuminator assembly, of multiple optical spectra may be illuminated simultaneously to capture an image. The optics assembly 1061 focuses the reflected/backscattered illumination light onto an image sensor of the image sensor assembly 1053.
The apparatus may further comprise a processor 1050 in the instant example, or processor 104 from previous figures. There may be more than one processor in the skin diagnostic system. The processor 1050 may be responsible for controlling the imaging parameters of the illuminator assembly 1040 and the image sensor assembly 1053. The imaging parameters may include the frame rate, the image acquisition time, the number of frames added for an image, the illumination wavelengths, and any combination thereof. The processor 1050 may further be configured to receive an initiation signal from a user of the apparatus (e.g., pushing of a trigger button) and may be in communication with a skin diagnostic system.
In some embodiments of the skin diagnostic system, there are a plurality of preprocessing methods for the images captured. An image captured may be cropped or sized to the measurement of an energy-based treatment spot. Additional preprocessing functions that may be utilized are a quality check, an illumination correction, a registration, and a reflectance calibration.
Reflectance calibration may be done in real time. The real time calibration may be done according to the following formula:
Calibrated Image = (registered image / calibration coefficient) × (marker calibration values / markers measured)
wherein the registered image is the plurality of monochrome images aligned with each other. The calibration coefficient is a plurality of reflectance values of each monochrome image from a reflective material, which may be Spectralon®; an average of the plurality of reflectance values may be used as the calibration coefficient. The calibration coefficient is usually determined at the time of manufacture of the skin diagnostic system. The marker calibration values refer to the fiducial markers 1304; the same process as for the calibration coefficient is used, except that the determination is done from cropped images of only the fiducial marker, also at the time of manufacture. The markers measured are the real-time current calibration values of the fiducial marker cropped image. After pre-processing, the incoming image data may then be parsed for input into a module or model.
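The calibration formula above can be sketched in a few lines of NumPy; the function and argument names are illustrative assumptions, not the disclosed implementation:

```python
import numpy as np

def calibrate_image(registered, calib_coeff, marker_calib, marker_measured):
    """Sketch of the real-time reflectance calibration described above.

    registered      -- registered monochrome image (2-D float array)
    calib_coeff     -- factory calibration coefficient (e.g., the average
                       Spectralon reflectance for this wavelength)
    marker_calib    -- factory calibration value of the fiducial marker
    marker_measured -- fiducial-marker value measured in the current frame
    """
    return (registered / calib_coeff) * (marker_calib / marker_measured)

# Example: illumination drift (markers reading low) is compensated upward.
img = np.full((4, 4), 0.45)
calibrated = calibrate_image(img, calib_coeff=0.9, marker_calib=0.8,
                             marker_measured=0.72)
```

The marker ratio corrects for frame-to-frame illumination drift, while the factory coefficient normalizes each wavelength channel to absolute reflectance.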
In some embodiments, the skin diagnostic system generates a color map or RGB image from the monochromatic images. The color map may be a 24-bit RGB image in a non-compressed or compressed image format. This image is constructed using the 650 nm, 570 nm, and 450 nm wavelengths. In some embodiments, each wavelength used in the color map first has a global brightening step and a local contrast enhancement step performed before the wavelengths are combined. In some embodiments, any monochrome images may be combined. Combinations of other wavelengths may have the effect of enhancing certain skin structures/conditions, as can be seen in
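A minimal sketch of combining three calibrated monochrome captures into a 24-bit RGB map follows; the simple min-max stretch stands in for the brightening and local contrast-enhancement steps mentioned above, and all names are assumptions:

```python
import numpy as np

def build_rgb_map(red_img, green_img, blue_img):
    """Combine three calibrated monochrome images (e.g., the 650 nm,
    570 nm, and 450 nm captures) into one 24-bit RGB image.

    The per-channel rescale to the full 8-bit range is an illustrative
    stand-in for the global brightening / local contrast steps.
    """
    def stretch(img):
        lo, hi = img.min(), img.max()
        if hi - lo < 1e-12:          # flat image: avoid divide-by-zero
            return np.zeros_like(img, dtype=np.uint8)
        return ((img - lo) / (hi - lo) * 255).astype(np.uint8)

    return np.dstack([stretch(red_img), stretch(green_img), stretch(blue_img)])

rgb = build_rgb_map(np.random.rand(8, 8), np.random.rand(8, 8),
                    np.random.rand(8, 8))
```

Swapping other wavelength images into the three channels yields the alternative structure-enhancing combinations described in the text.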
At block 501, the skin diagnostic system is configured to receive the target skin data comprising multi-spectral images.
At block 503, the skin diagnostic system is configured to analyze the target skin data using at least one trained model to determine attributes of the target skin.
At block 505, the system is configured to output the skin attributes of the analyzed target skin. In some embodiments, these attributes are associated with skin conditions, skin diseases, skin reactions to treatment, or any combination thereof.
In some embodiments, hair in the target skin data is automatically identified and removed (masked) from further analysis utilizing a hair mask module in the one or more modules 107. In some embodiments, a deep learning model for masking of hair is a U-Net deep learning semantic segmentation classifier model with a depth of three layers; for a specific example, see
In some embodiments, the hair mask model receives one monochromatic image of the target skin images. The one image may be of a wavelength between about 590 nm and 720 nm. As seen in
In some embodiments, the skin type of a person based on the Fitzpatrick scale is automatically determined by a skin type module in the one or more modules 107. The Fitzpatrick scale is a measure of the response of skin to ultraviolet (UV) light and is one designation for the person's whole body. Typically, a trained medical professional makes such a determination. In some embodiments, the skin type module comprises a machine learning multi-layer perceptron neural network model, hereinafter the skin type model. In some embodiments, the skin type model is trained with images of target skin labeled with the appropriate skin type, numbered 1 to 6. In some embodiments, the images labeled for training were labeled by a medical professional.
In some embodiments, the skin type model receives skin type data comprising an average calibrated reflectance value of the total pixels of each monochrome image [the average spectrum of all the monochrome images], and the output of the skin type model is to classify the skin type into one of six skin types. Skin type data may be collected in a memory for further development of skin type models.
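The feature vector fed to the skin type model can be sketched as below; the classifier itself (a multi-layer perceptron outputting one of the six Fitzpatrick types) is omitted, and the names are illustrative assumptions:

```python
import numpy as np

def skin_type_features(monochrome_images):
    """Build the skin-type model input described above: the average
    calibrated reflectance over all pixels of each monochrome image,
    i.e., the mean spectrum across the captured wavelengths.
    """
    return np.array([img.mean() for img in monochrome_images])

# Seven wavelength captures -> a 7-element mean-spectrum feature vector.
images = [np.full((16, 16), 0.1 * (i + 1)) for i in range(7)]
features = skin_type_features(images)
```

This one-number-per-wavelength summary is what lets a small perceptron, rather than a full image model, perform the six-way classification.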
In some embodiments, the output is a skin type for the target skin to be treated and is automatically determined by a skin type module in the one or more modules.
Reflectance images from skin tissue may be determined by two physical properties, chromophore absorption and reduced scattering of the induced illumination. Integration of those parameters through tissue depth yields the reflectance image. Thus, reflectance imaging (different wavelengths, polarizations, and patterns) provides information about the basic skin optical properties up to several millimeters in depth.
In some embodiments, skin attributes related to spectral analysis are automatically determined and generated. In some embodiments, look up tables (LUT) such as
The skin attribute values may include, but are not limited to, melanin (pigment) density, vascular (erythema) density, and coefficient of scattering of light. In some embodiments, physical equations and spectral analysis are used to complete the LUT with the skin attributes per wavelengths.
In some embodiments, a machine learning model receives the image skin data and links the spectral wavelength response to skin chromophore quantities. In some embodiments, these may be other skin chromophore (color-producing molecules) quantities, such as, but not limited to, vascular areas, melanin areas, and collagen.
Each pixel of each of a plurality of wavelength images is input to a machine learning model to search the LUT. Each of the plurality of skin attribute values and maps utilizes a different machine learning model (hereinafter generic model) to determine each skin attribute value on target skin. For example, when seven wavelength images are employed, seven numbers for each pixel are input into the generic model to obtain an output of one number for each pixel. A brute-force or naïve search of a long LUT would typically analyze each line of the table and is very slow and time consuming, especially when repeated for each pixel of multiple monochrome wavelength images. Therefore, the generic model is utilized for faster and more efficient use of the LUT.
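The slow brute-force baseline that the trained regression tree replaces can be sketched as below; the function name and LUT layout are illustrative assumptions, not from the disclosure:

```python
import math

def nearest_lut_entry(pixel_reflectances, lut):
    """Brute-force nearest-neighbour search over a LUT.

    pixel_reflectances: reflectance of one pixel at each imaged wavelength
    (e.g., seven numbers for seven wavelength images).
    lut: list of (reflectance_vector, attribute_value) rows.
    Returns the attribute value of the row closest in Euclidean distance;
    this scans every row, which is why a trained tree is used instead.
    """
    best_value, best_distance = None, math.inf
    for reflectances, attribute_value in lut:
        distance = math.dist(pixel_reflectances, reflectances)
        if distance < best_distance:
            best_distance, best_value = distance, attribute_value
    return best_value
```

A regression tree trained on the same LUT rows approximates this lookup in logarithmic rather than linear time per pixel.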
Optionally, the generic models that use the LUT output an estimated value of a particular LUT skin attribute in the target skin for each pixel. In some embodiments, once the estimate for each pixel is determined, anomalous values or outliers of the LUT skin attribute are identified. In some embodiments, the anomalous level of the LUT skin attribute is determined by the equation: Anomalous level ≥ Mean(LUT skin attribute) + c × STD(LUT skin attribute), where c is an arbitrary coefficient, for example 2. In some embodiments, the coefficient is determined experimentally by analyzing the distributions of the LUT skin attribute in a large number of images. The coefficient is different for each of the LUT skin attributes. The non-anomalous levels are then classified as normal skin, and the anomalous levels are identified as the specific LUT skin attribute density.
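The outlier rule above can be sketched directly; the function name is illustrative, and c stands in for the experimentally tuned coefficient:

```python
from statistics import mean, stdev

def anomalous_pixels(values, c=2.0):
    """Flag per-pixel LUT skin attribute estimates that are anomalous.

    A value is anomalous when it reaches mean + c * std of the estimates,
    per the equation above; c = 2 is the example coefficient and is
    normally tuned experimentally for each attribute.
    """
    threshold = mean(values) + c * stdev(values)
    return [v >= threshold for v in values]
```

Pixels flagged True would be rendered as the anomalous regions of the attribute map.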
Optionally, a basic map is generated illustrating the areas of the LUT skin attribute with anomalous levels and a corresponding color bar. In some embodiments, the scale of the map is adjusted such that the 0-15% range is mapped into 0-255 digital levels for display of the map. In some embodiments, the anomalous level pixels are compared to the total number of pixels to determine the relative area of the anomalous region. For example: LUT Skin Attribute Value = Total Pixels with anomalous values in Map / Total Pixels in image. Displayed units: % of image area (0-100%).
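The display scaling and relative-area computation described above can be sketched as follows; function names are illustrative assumptions:

```python
def to_display_level(fraction):
    """Map an attribute fraction in the 0-15% range into 0-255 digital levels.

    Fractions above 15% saturate at 255, per the display scale described.
    """
    clipped = min(max(fraction, 0.0), 0.15)
    return round(clipped / 0.15 * 255)

def attribute_area_percent(anomalous_mask):
    """Anomalous-pixel count relative to total pixels, as % of image area."""
    return 100.0 * sum(anomalous_mask) / len(anomalous_mask)
```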
In some embodiments, the generic models are trained using a plurality of pixels from the image skin data on the LUT data to determine the attributes in target skin to be identified. The machine learning models for specific skin attributes will be further discussed below.
In some embodiments, utilizing the LUT, a melanin density and map are automatically determined by a melanin module in the one or more modules 107. In some embodiments, the machine learning model for melanin density and mapping is a machine learning regression tree model for identifying melanin, hereinafter a melanin tree model, an example of which is seen in
In some embodiments, the melanin tree model receives the image skin data per pixel with a plurality of absolute reflectance values representing the plurality of wavelengths imaged. The melanin tree model then analyzes the plurality of absolute values per pixel compared to the LUT values and identifies for each pixel the one LUT entry with the value closest in distance to the plurality of measured absolute values for that pixel. This distance may be computed with a similarity or distance measure such as, for example, cosine similarity, Euclidean distance, or any combination thereof.
In some embodiments, a map of the melanin density as illustrated in
In some embodiments, utilizing the LUT, a vascular density and map are automatically determined by a vascular module in the one or more modules 107. In some embodiments, the machine learning model for vascular density and mapping is a machine learning regression tree model for identifying vascular areas, hereinafter vascular tree model. By way of specific example, the vascular tree model has a tree depth of 41 layers (see 1101 of
In some embodiments, and similar to the melanin tree model above, the vascular tree model receives the image skin data and links the spectral wavelength response to skin chromophore quantities, in this case to vascular density.
In some embodiments, the vascular tree model receives the image skin data per pixel with a plurality of absolute reflectance values representing a plurality of wavelengths imaged. The vascular tree model then analyzes the plurality of absolute values per pixel compared to the LUT values and identifies for each pixel the one LUT entry with the value closest in distance to the plurality of measured absolute values for that pixel. This distance may be computed with a similarity or distance measure such as, for example, cosine similarity, Euclidean distance, or any combination thereof.
In some embodiments, a map of the vascular density, as illustrated in
In some embodiments, also utilizing the LUT, a scattering light value is automatically determined by a scattering module in the one or more modules 107. In some embodiments, the machine learning model for the scattering light value is a machine learning regression tree model for identifying scattering attributes of the target skin, hereinafter scattering tree model. By way of specific example, the scattering tree model has a tree depth of 35 layers and 81,543 leaves. The LUT discussed above is used by the scattering tree model to generate the scattering value.
In some embodiments, the scattering tree model receives the image skin data per pixel with a plurality of absolute reflectance values representing multiple wavelengths imaged.
The scattering tree model then analyzes the plurality of absolute values per pixel compared to the LUT values and identifies for each pixel the one LUT entry with the value closest in distance to the plurality of measured absolute values for that pixel. This distance may be computed with a similarity or distance measure such as, for example, cosine similarity, Euclidean distance, or any combination thereof.
In some embodiments, skin chromophore estimations predict treatment energy absorption to predict treatment outcome (assuming known melanin/pigment and blood response to energy/temperature). In some embodiments, the output values and maps for melanin density, vascular density and scattering light may be collected in a memory for further development of machine learning models.
At block 1401, the skin diagnostic system is configured to receive image skin data comprising a plurality of monochromatic images of target skin.
At block 1403, the skin diagnostic system is configured to analyze each pixel of the plurality of monochromatic images of target skin.
At block 1405, the system is configured to measure the absolute reflectance values for each pixel of the specific skin attribute value sought of the plurality of monochromatic images of target skin.
At block 1407, the system using the machine learning modules is configured to map the absolute reflectance values for each pixel to the one value for the same pixel represented in the LUT.
At block 1421, the skin diagnostic system is configured to receive the LUT entry with the value closest in distance to the absolute values for the specific skin attribute sought for each pixel.
At block 1423, the skin diagnostic system is configured to determine a second LUT entry value for each pixel that represents the one skin attribute to display, and to set all the additional skin attributes listed in the closest LUT entry to zero.
At block 1425, the system is configured to generate a display by mapping the determined second LUT entry value of each pixel to red, green, and blue values.
In some embodiments, vascular lesion depth map is automatically determined and generated by a vascular depth module in the one or more modules 107. In some embodiments, a deep learning model for vascular depth determination is a U-Net deep learning semantic segmentation classifier model with a depth of four layers, hereinafter vascular depth model. In the current disclosure, a vascular lesion is a vascular structure apparent to the human eye. In some embodiments, the vascular depth model is trained to detect four classifications per pixel utilizing all the monochromatic images of image data. In some embodiments, the four classifications are deep depth vascular lesion, medium depth vascular lesion, shallow depth vascular lesion, and background.
In some embodiments, the vascular model is trained with labeled target skin images and each pixel labeled with the classifications in the target skin image, by way of specific example four classifications. In some embodiments, the target skin images are labeled for training with the classifications by experienced medical personnel.
In some embodiments, the vascular depth model receives a plurality of monochromatic images of the target skin data. In some embodiments, the output of the vascular depth model is an array with, for each pixel of the image, scores for matching each of the four trained classifications. In some embodiments, the vascular depth model further analyzes the four-probability matrix (the output for the four classifications) by processing the relevant three probability layers into a three-possibility matrix, that is, three depths of vascular lesions. In some embodiments, the three-possibility matrix is utilized by the vascular depth model for further analysis, and the output is the model probabilities of three classes: shallow, medium (neither shallow nor deep), and deep vascular lesions. The classification with the maximal score may be chosen to be the predicted class for that pixel. In some embodiments, vascular structure lesion data may be collected in a memory of the skin diagnostic system for further development of the vascular depth models.
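The per-pixel class selection described above can be sketched as a maximal-score pick over the non-background layers; the function name and score layout are illustrative assumptions:

```python
def vascular_depth_class(scores):
    """Choose the per-pixel vascular depth from the model's class scores.

    scores: mapping of the four trained classes to matching scores for one
    pixel.  The background layer is dropped and the remaining depth class
    with the maximal score is chosen as the predicted class.
    """
    depths = {name: s for name, s in scores.items() if name != "background"}
    return max(depths, key=depths.get)
```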
In some embodiments, a vascular lesion depth map, is generated by the vascular depth model comprising a semi-transparent RGB or greyscale map overlaid with marking of vascular lesions segmented into shallow, medium, and deep. In some embodiments, the vascular module determines which pixels to mark in either of the three colors or markings. The vascular lesion depth map may use different colors or other markings to denote the depths of vascular lesions as seen in
In some embodiments, a depth determination as one of shallow, medium, or deep is determined automatically for the single image of the vascular lesion map by a one label vascular lesion module in the one or more modules 107.
In some embodiments, the one label vascular lesion model is trained with labeled target skin images labeled as an image with the classifications. In some embodiments, the target skin images are labeled with the classifications by experienced medical personnel.
Each pixel label outputted by the vascular module is received into the one label vascular lesion module. The one label vascular lesion module is a machine learning classifier model and outputs a single label for the image of shallow, medium, or deep.
In some embodiments, a pigment lesions depth map is automatically determined and generated by a pigment depth module in the one or more modules 107. Typically, a pigment lesion is an abnormal level of melanin based on a person's skin type. In some embodiments, pigment depth module comprises a machine learning 1D classifier model, hereinafter a pigment depth model. In some embodiments, pigment depth model is trained with images labeled by trained medical personnel as either “epidermal” (shallow) or “junctional” (deep) pigment lesions.
In some embodiments, the pigment depth model receives results from a vascular depth model and a hair mask model, removing the hair and vascular lesion information for each pixel from the pigment depth module analysis. Thus, the removal of the hair and vascular lesion from images and data removes hair and vascular lesion pixels from any further analysis of target skin, by instructing other modules and/or models to ignore those pixels.
In some embodiments, the pigment depth model receives measured brightness intensity per pixel of an image at two wavelengths. Typically, a low wavelength value such as 450 nm captures an image shallower in the target skin and a high wavelength value such as 850 nm captures an image deeper in the target skin. Also, pigment/melanin typically absorbs light (attenuates the amount of reflected light), resulting in darker image regions.
In some embodiments, the low wavelength value image is analyzed per pixel by the pigment depth model for determination of pigment lesions and if pigment lesions are present in the pixel, it is labeled as shallow pigment lesion pixel. In some embodiments, the high wavelength value image is analyzed per pixel by the pigment depth model for determination of pigment lesions and if pigment lesions are present in the pixel, it is labeled as deep pigment lesion pixel.
In some embodiments, pigment lesion pixels are determined in either wavelength value by brightness values assigned to each pixel, with a brightness value of 255 representing white and a value of zero representing black. In some embodiments, the pixel outliers for darkness are identified using standard deviation calculations. In some embodiments, the pigment depth model identifies outlier brightness intensity pixels by means of statistical analysis of the distribution of intensity levels in standard deviations. The pigment depth model then may identify a threshold to classify the outliers as pigment lesions present in the target skin. In some embodiments, more than two depths of the pigment lesions may be classified.
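The darkness-outlier rule can be sketched symmetrically to the anomaly rule used for the LUT attributes; the function name and the coefficient c are illustrative stand-ins for the statistically derived threshold:

```python
from statistics import mean, stdev

def pigment_lesion_pixels(brightness, c=1.0):
    """Flag darkness outliers as pigment lesion pixels.

    brightness: per-pixel values where 255 is white and 0 is black.  A pixel
    is flagged when its brightness falls below mean - c * std of the image's
    intensity distribution.
    """
    threshold = mean(brightness) - c * stdev(brightness)
    return [b < threshold for b in brightness]
```

Run on the low-wavelength image, flagged pixels would be labeled shallow pigment lesions; on the high-wavelength image, deep pigment lesions.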
In some embodiments of the skin diagnostic system, a pigment lesion depth map is generated using the outlier pixels in each image of the lowest and highest value wavelengths. The pigment lesion depth map may use different colors or other markings to denote the depths of pigment lesions as seen in
In some embodiments, the pigment depth model receives a plurality of monochromatic images of the target skin data and does not require input of the vascular depth model. In some embodiments, the output of the pigment depth model is an array with, for each pixel of the image, scores for matching each of the four trained classes. In some embodiments, the pigment depth model further analyzes the four-probability matrix (the output for the four classifications) by processing the relevant three probability layers into a three-possibility matrix plus background, that is, three depths of pigment lesions. The output is the model probabilities of three classes: epidermal (shallow), junctional (now medium), or dermal (deep) lesions. The classification with the maximal score may be chosen to be the predicted class for that pixel.
In some embodiments, the vascular depth map and the melanin depth map are combined automatically by the skin diagnostic system. The output of the vascular depth model generated vascular lesion map and the pigment depth module pigment lesion map are combined per pixel by the system.
At block 1701, the skin diagnostic system is configured to receive image skin data comprising a plurality of monochromatic images of target skin.
At block 1703, the skin diagnostic system is configured to identify, by the vascular depth model one label for each of the plurality of monochromatic images and pixels and the label is one of four classifications. The four classifications are background, vascular lesion deep, vascular lesion medium and vascular lesion shallow.
At block 1705, the skin diagnostic system is configured to receive, by the pigment depth model, image skin data comprising two monochromatic images of target skin.
At block 1707, the skin diagnostic system is configured to also receive, by the pigment depth model, output of the vascular depth model of the classifications for vascular lesions regardless of depth. The pigment depth model does not analyze the pixels already labeled vascular lesions.
At block 1709, the system is configured to determine, by the pigment depth model, outliers in darkness of the two wavelength values.
At block 1711, the system is configured to label, by the pigment depth model, the low wavelength value outliers as shallow pigment lesions per pixel and the high wavelength value as deep pigment lesions per pixel.
At block 1713, the system is configured to generate a display utilizing each pixel of one image labeled in 1 of 6 classifications determined. The classifications are background, deep pigment lesion, shallow pigment lesion, deep vascular lesion, medium vascular lesion, and shallow vascular lesion.
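The per-pixel merge into the six display classifications at block 1713 can be sketched as below; the function name is illustrative, and vascular labels take priority here because the pigment depth model skips pixels already labeled as vascular lesions:

```python
def combined_lesion_label(vascular_label, pigment_label):
    """Merge per-pixel vascular and pigment labels into one of the six
    display classifications: background, deep/shallow pigment lesion, or
    deep/medium/shallow vascular lesion.
    """
    if vascular_label != "background":
        return vascular_label   # deep, medium, or shallow vascular lesion
    if pigment_label != "background":
        return pigment_label    # deep or shallow pigment lesion
    return "background"
```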
In some embodiments, a vascular lesion value and a pigment lesion value are calculated and displayed for the medical personnel. For vascular, the value is the vascular lesion regions relative to total image pixels. For example: Vascular Value = Total Pixels in Vascular Lesion Map / Total Pixels in image. Displayed units: % of image area (0-100%). For pigment, the value is the pigment lesion regions relative to total image pixels. For example: Pigment Lesion Value = Total Pixels in Pigment Lesion Map / Total Pixels in image. Displayed units: % of image area (0-100%).
In some embodiments, the skin diagnostic system will calculate and generate a ratio, displayed in units of percentage, of vascular lesions to pigment lesions for the medical personnel. This may aid a medical professional in determining which to treat first. By way of specific example: Ratio of Vascular to Pigment Lesions = Total Pixels (or mm) in Vascular Lesion Map / Total Pixels (or mm) in Pigment Lesion Map.
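The two displayed values and the ratio reduce to simple pixel-count arithmetic; the function names are illustrative:

```python
def lesion_value_percent(lesion_map_pixels, total_pixels):
    """Lesion region relative to total image pixels, in % of image area."""
    return 100.0 * lesion_map_pixels / total_pixels

def vascular_to_pigment_ratio(vascular_pixels, pigment_pixels):
    """Ratio of the vascular lesion map to the pigment lesion map."""
    return vascular_pixels / pigment_pixels
```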
In some embodiments, pigment intensity of a pigment lesion is automatically determined by a pigment intensity module in the one or more modules 107. Typically, pigment intensity is the contrast between a pigment lesion and the background skin of target skin tissue. This contrast of the lesion to the surrounding target skin is typically determined by a medical professional, thus a human eye. Therefore, the contrast is determined not only by the empirical difference between a pigment lesion intensity and the surrounding target skin intensity, but also by a non-linear human impression of the baseline (dark or light background) of the surrounding skin. The intensity of a pigment lesion (contrast of pigment lesion to surrounding skin) may be used as a treatment input for calculating the amount of energy needed to treat the pigment lesion.
In some embodiments, the pigment intensity module comprises a machine learning random forest classification model having two outputs, light or dark lesion, hereinafter the pigment intensity model. Typically, the intensity, or contrast of brightness, is nonlinear and depends on the baseline intensity of the skin. In some embodiments, the pigment intensity model is trained with images of target skin labeled with the intensity of the lesion.
In some embodiments, the pigment intensity model receives data of three features from each of a plurality of monochromatic images. Feature 1 is a threshold of the 99th percentile of concentration of melanin, representing the lesion. Feature 2 is a calculated median melanin level of the whole image, that is, an output from the melanin density module that uses the LUT. Finally, feature 3 comprises feature 1 subtracted from feature 2. The output of the pigment intensity model is either a light or dark lesion. In some embodiments, the pigment intensity data is collected in the memory for further development of pigment intensity models.
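The three features can be sketched as follows; the nearest-rank percentile helper and function names are illustrative assumptions, and feature 3 follows the text (feature 1 subtracted from feature 2):

```python
from statistics import median

def percentile_99(values):
    """Nearest-rank 99th percentile, standing in for the lesion threshold."""
    ordered = sorted(values)
    rank = max(0, round(0.99 * len(ordered)) - 1)
    return ordered[rank]

def pigment_intensity_features(melanin_map):
    """Assemble the three features described for the pigment intensity model.

    melanin_map: per-pixel melanin concentrations from the melanin module.
    """
    f1 = percentile_99(melanin_map)  # feature 1: lesion concentration threshold
    f2 = median(melanin_map)         # feature 2: median melanin of whole image
    f3 = f2 - f1                     # feature 3: feature 1 subtracted from feature 2
    return f1, f2, f3
```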
In some embodiments, hair attributes in target skin are automatically determined by a hair attributes module of the one or more modules 107. In some embodiments, the hair attributes module receives the output of the hair mask model to identify the hair in the target skin. In some embodiments, the hair attributes module comprises a machine or deep learning classifier model (hereinafter hair attributes model) trained with labeled skin images of medical personnel to detect hair color and hair texture.
In some embodiments, the hair attributes model is trained to determine the color of the hair with labeled target skin images by pixel labeling the hair to a color. After subjective training with the labeled skin images, the classifier will generate the number of classifications for the hair color. In some embodiments, the hair color is four classifications of: blond/red, light brown, dark brown, and black.
In some embodiments, the hair attributes model is trained to determine the hair texture. In some embodiments, the input data for determining hair texture is one monochromatic image of the target skin images. The one image may be captured at a wavelength between about 590 nm and about 720 nm. Each image is of a known size, and therefore counting the pixels of each hair, specifically the pixels of the width of the hair, may determine a hair diameter for each hair. Likewise, counting the pixels of hair compared to overall pixels may determine hair density. The information on hair density and hair diameters, along with subjective labeled training of a machine learning classifier, may generate classifications for the hair texture. In an alternative method, a threshold of diameters for each classification may be determined for classification. In some embodiments, the hair texture is three classifications of: fine, medium, and coarse. In some embodiments, the hair attributes model also determines hair thickness, hair melanin level, and hair count.
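The pixel-counting measurements described above can be sketched as follows; the function names and the per-pixel scale parameter are illustrative assumptions:

```python
def hair_density(hair_mask):
    """Fraction of image pixels belonging to hair, from the hair mask output.

    hair_mask: 2-D list of 1 (hair) / 0 (background) per pixel.
    """
    pixels = [p for row in hair_mask for p in row]
    return sum(pixels) / len(pixels)

def hair_diameter_mm(width_pixels, mm_per_pixel):
    """Hair diameter from a pixel count across the hair's width.

    Works because each image covers a known physical size, so the
    per-pixel scale mm_per_pixel is known from the capture geometry.
    """
    return width_pixels * mm_per_pixel
```

Diameters computed this way could then be binned by thresholds into the fine, medium, and coarse texture classifications.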
In some embodiments, the skin diagnostic system generates the skin attributes and maps discussed above as input to skin treatment modules of the one or more treatment modules 109 to generate parameters to treat target skin. In some embodiments, the skin treatment module comprises a machine or deep learning model (hereinafter skin treatment model). Treatment parameters may include peak energy, energy fluence, pulse width, temporal profile, spot size, wavelength, train of pulses, and others. In some embodiments, the skin diagnostic system skin attributes and maps data may be collected and stored in memory for further development and training of diagnostic and skin treatment models.
The skin lesions or problems (hereinafter skin problem indications) to be treated include, but are not limited to: vascular lesions, pigment lesions, melasma, telangiectasia, poikiloderma, age spots, facial acne, non-facial acne, and hair removal. The vascular lesions and pigment lesions that may be treated include, but are not limited to: port wine stains, hemangioma, leg veins, rosacea, erythema of rosacea, lentigines, keratosis (growth of keratin on the skin), café-au-lait, hemosiderin, Becker nevus (a non-cancerous, large, brown birthmark), nevus of Ota/Ito (ocular dermal melanosis), acne, melasma, and hyperpigmentation. Some skin conditions are a combination of pigment and vascular lesions, such as, but not limited to: poikiloderma, age spots, and telangiectasia.
At block 1801, the skin treatment model of the skin diagnostic system is configured to receive the predetermined target skin area to be treated and the skin problem indication to be treated from the medical personnel and/or a user of the system. In some embodiments, the skin treatment model also receives treatment safety parameters and parameters of the capability of the energy treatment source. The user of the system may choose a plurality of skin areas where target skin is, as well as a plurality of skin problem indications to be treated for each skin area. The user of the system may be instructed on a display guiding the user where to aim the skin image handpiece 1300 to collect the image skin data required for the plurality of skin attribute models.
At block 1803, the skin treatment model of the skin diagnostic system is configured to receive output of the plurality of the skin attribute models of the target skin that are related to the predetermined skin problem indications to be treated.
In some embodiments, when the treatment is for a vascular lesion, the input to the skin treatment model is the skin type and the vascular lesion depths. In some embodiments, when the treatment is for a pigment lesion, the input to the skin treatment model is the skin type, the pigment lesion depths, and pigment intensity which employs the melanin density to determine pigment intensity. In some embodiments, when the treatment is for combined pigment and vascular lesions, the input to the skin treatment model is the skin type, the vascular lesion depths, the pigment lesion depths, and pigment intensity which employs the melanin density to determine pigment. In some embodiments when the treatment is for hair removal, the input to the skin treatment model is skin type, hair color, and hair texture.
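The routing of attribute model outputs to the skin treatment model described above can be sketched as a lookup; the table keys and attribute names are hypothetical labels for the inputs listed in the text:

```python
# Hypothetical routing table: which skin attribute model outputs feed the
# skin treatment model for each skin problem indication.
TREATMENT_INPUTS = {
    "vascular lesion": ["skin_type", "vascular_lesion_depths"],
    "pigment lesion": ["skin_type", "pigment_lesion_depths",
                       "pigment_intensity"],
    "combined lesion": ["skin_type", "vascular_lesion_depths",
                        "pigment_lesion_depths", "pigment_intensity"],
    "hair removal": ["skin_type", "hair_color", "hair_texture"],
}

def treatment_model_inputs(indication, attribute_outputs):
    """Select only the attribute outputs relevant to the chosen indication."""
    return {name: attribute_outputs[name]
            for name in TREATMENT_INPUTS[indication]}
```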
At block 1805, the skin treatment model of the skin diagnostic system is configured to analyze the skin attribute(s) for the predetermined skin treatment. In some embodiments, a plurality of skin treatment lookup tables, one for each of the skin problem indications to be treated, is employed by the skin treatment model to match the skin attributes with the appropriate skin treatment parameters. In some embodiments, the treatment lookup tables are developed specifically for IPL energy-based treatment. The plurality of skin treatment lookup tables may be generated from medical personnel input and a large set of data collected in clinical trials.
At block 1807, the skin diagnostic system is configured to determine and generate a display of suggested treatment parameters. In some skin diagnostic systems, the system is configured to display an RGB image of the target skin with the suggested treatment parameters. In some embodiments, a plurality of maps of the target skin related to the treatment are displayed, such as, but not limited to, a melanin density map, a vascular density map, a pigment lesion depth map, a vascular lesion depth map, a pigment intensity map, or any combination thereof. These maps may aid the medical personnel and/or the user in deciding which treatment parameters to use. In some embodiments, reports of the treatment recommended, the treatment performed, and a plurality of maps of the target skin are all saved in a database for future training of machine learning models, for future display to the user, and for future generation of a report per patient.
In some embodiments, the skin diagnostic system 100 or the combined system 100A may include a diagnose module with a deep or machine learning model to diagnose the skin problem indications to be treated using the image skin data without medical personnel or user input required. In some embodiments, the combined system 100A also has a treatment determination module with a deep learning or a machine learning model to analyze and determine the treatment of the target skin based on the image skin data.
In some embodiments, the diagnose module and/or the treatment determination module are trained with images that may use additional skin attributes data not historically considered to determine treatment. In some embodiments, the system may capture an image of a target skin area and, based on the image and deep and/or machine learning, determine both a treatment and output a simulation image of the target skin area after treatment.
In some embodiments, the treatment source is an intense pulse light (IPL) treatment source. In some embodiments, the IPL treatment source uses different filters for treatment and by way of specific example a special filter for acne.
In some embodiments, both an image capture device and a treatment source are housed in the same handpiece. In these cases, the handpiece may be operable in two modes: a treatment mode for delivery of energy-based treatment from, e.g., an intense pulsed light (IPL) source, to an area of a patient's skin; and a diagnostic mode for acquiring an image of the area of skin. In some embodiments, the apparatus is a handpiece of a therapeutic IPL source system, connected to the system by a tethered connection.
The switching of the system between the two modes may be made in a relatively short time (at most a few seconds, in some embodiments), such that in-treatment monitoring is achievable. Furthermore, in some embodiments, the apparatus sends image data to a skin diagnostic system, which analyzes images and computes optimal treatment parameters, at least the optimal parameters for the next delivery, and sends the optimal treatment course to the controller or a display of the apparatus in real time. The apparatus enables, for example, iterations of imaging the skin after a delivery of energy-based treatment and deciding parameters of the next delivery without undue delay. "In-treatment" monitoring does not imply that monitoring necessarily takes place at the same time as treatment. The system may switch between the treatment mode and the diagnostic mode within a period of time that is sufficiently short for the user, i.e., several seconds. During a treatment the system may switch between treatment and diagnostic modes multiple times. Details on skin treatment and real-time monitoring with a combined treatment and image capturing handpiece are further described in PCT Serial No. PCT/IL2023/050785 filed 30 Jul. 2023, which is hereby incorporated by reference in its entirety.
Throughout the specification, the following terms take the meanings explicitly associated herein, unless the context clearly dictates otherwise. The phrases “in one embodiment” and “in some embodiments” as used herein do not necessarily refer to the same embodiment(s), though it may. Furthermore, the phrases “in another embodiment” and “in some other embodiments” as used herein do not necessarily refer to a different embodiment, although it may. Thus, as described below, various embodiments may be readily combined, without departing from the scope or spirit of the present disclosure.
The terms “a”, “an” and “the” mean “one or more”, unless expressly specified otherwise.
Throughout the specification, the word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment or implementation of the present subject matter described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments.
The terms “comprises”, “comprising”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, device, or method that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or device or method. In other words, one or more elements in a system or apparatus preceded by “comprises . . . a” does not, without more constraints, preclude the existence of other elements or additional elements in the system or method.
The terms “includes”, “including”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, device, or method that includes a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or device or method. In other words, one or more elements in a system or apparatus preceded by “includes . . . a” does not, without more constraints, preclude the existence of other elements or additional elements in the system or method.
In addition, the term “based on” is not exclusive and allows for being based on additional factors not described, unless the context clearly dictates otherwise. In addition, throughout the specification, the meaning of “a,” “an,” and “the” include plural references. The meaning of “in” includes “in” and “on.”
It is understood that at least one aspect/functionality of various embodiments described herein can be performed in real-time and/or dynamically. As used herein, the term “real-time” or “near real-time” refers to an event/action that can occur instantaneously or almost instantaneously in time when another event/action has occurred. For example, “real-time processing,” “real-time computation,” and “real-time execution” all pertain to the performance of a computation during the actual time that the related physical process (e.g., a user interacting with an application on a mobile device) occurs, in order that results of the computation can be used in guiding the physical process. In some embodiments, events and/or actions in accordance with the present disclosure can be in real-time, near real-time, and/or based on a predetermined periodicity of at least one of: nanosecond, several nanoseconds, millisecond, several milliseconds, second, several seconds, minute, several minutes, hourly, several hours, daily, several days, weekly, monthly, etc. As used herein, the terms “dynamically” and “automatically,” and their logical and/or linguistic relatives and/or derivatives, mean that certain events and/or actions can be triggered and/or occur without any human intervention.
Computer systems, and systems, as used herein, can include any combination of hardware and software. Examples of software may include software components, programs, applications, operating system software, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, Application Programming Interfaces (API), computer code, data, data variables, or any combination thereof that can be processed by a computing device as computer-executable instructions.
In some embodiments, one or more of computer-based systems of the present disclosure may include or be incorporated, partially or entirely into at least one Personal Computer (PC), laptop computer, tablet, portable computer, smart device (e.g., smart phone, smart tablet or smart television), Mobile Internet Device (MID), messaging device, data communication device, server computer, and so forth.
Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments of the invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
This application claims the benefit of U.S. Provisional Application No. 63/428,827 filed Nov. 30, 2022, entitled “System and Method for Skin Type Determination”, U.S. Provisional Application No. 63/428,832 filed Nov. 30, 2022, entitled “System and Method for Determining Human Skin Attributes”, U.S. Provisional Application No. 63/428,835 filed Nov. 30, 2022, entitled “System and Method for Masking Hair in a Skin Diagnostic System”, U.S. Provisional Application No. 63/428,849 filed Nov. 30, 2022, entitled “System and Method for Identifying Vascular Structure Depth in Skin”, U.S. Provisional Application No. 63/428,877 filed Nov. 30, 2022, entitled “System and Method for Determining Pigment Intensity in a Diagnostic System”, and U.S. Provisional Application No. 63/428,892 filed Nov. 30, 2022, entitled “System and Method for Identifying Pigment Structure Depth in Skin”; the entire contents of these applications are herein incorporated by reference.
Number | Date | Country
---|---|---
63/428,827 | Nov. 30, 2022 | US
63/428,832 | Nov. 30, 2022 | US
63/428,835 | Nov. 30, 2022 | US
63/428,849 | Nov. 30, 2022 | US
63/428,877 | Nov. 30, 2022 | US
63/428,892 | Nov. 30, 2022 | US