Embodiments of the present invention are in the field of evaluating and training a machine learning (ML) module when its corresponding truth data sets are unavailable, using a second trainable ML module. Embodiments are applicable to automated body measurements.
The statements in the background of the invention are provided to assist with understanding the invention and its applications and uses, and may not constitute prior art.
There are multiple applications in which machine learning (ML) modules need to be trained, where corresponding ground truth data sets are not necessarily available, complete, or reliable.
In automated body measurements, obtaining an accurate estimate of a user's measurements has many useful applications. For example, clothing, accessory, and footwear retail requires estimation of body measurements. In addition, fitness tracking and weight-loss tracking require estimation of body weight. Accurately estimating clothing size and fit can be based on body-part length and body weight measurements. Such an estimation can be performed with machine learning through a multi-stage process having user images as an input and one or more body or body-part measurements as an output. The annotation of user images is often required as an initial stage in this process, where annotation is the generation of annotation keypoints or annotation lines indicating corresponding body feature measurement locations underneath user clothing for one or more identified body features (e.g., height, size of foot, size of arm, size of torso, etc.). Image annotations may be carried out through one or more annotation ML modules that have been trained on each body feature, such as an annotation deep neural network (DNN).
The second stage of the process uses the keypoint or line annotations as an intermediate input to generate one or more body or body part measurements. This stage is carried out through one or more ML modules that have been trained to generate one or more measurements from keypoint or line annotations of one or more body features, such as a regressor. Other machine learning methods are also within the scope of the annotation and measurement ML modules. For example, other ML algorithms including, but not limited to, nearest neighbor, decision trees, support vector machines (SVM), Adaboost, Bayesian networks, fuzzy logic models, various neural networks including deep learning networks, evolutionary algorithms, and so forth, are within the scope of the present invention. In the context of the present disclosure, the above ML methods represent different ML types.
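For purposes of illustration only, the two-stage structure described above may be sketched as follows, where the function names and the toy computations are illustrative placeholders rather than the actual annotation DNN or measurement regressor:

```python
# Non-limiting sketch of the two-stage measurement pipeline.
# The toy arithmetic below merely stands in for trained ML modules.

def annotate(image_features):
    """Stage 1 (annotation module M_AB): map image features to keypoints."""
    # Placeholder: a real annotation DNN would predict keypoint coordinates
    # for each body feature underneath the user's clothing.
    return [2.0 * x for x in image_features]

def measure(keypoints):
    """Stage 2 (measurement module M_BC): map keypoints to a measurement."""
    # Placeholder: a real regressor would map keypoints to one or more
    # body or body-part measurements.
    return sum(keypoints) / len(keypoints)

def pipeline(image_features):
    """Full pipeline: user image features in, body measurement out."""
    return measure(annotate(image_features))
```

In an actual embodiment, `annotate` would be a trained DNN and `measure` a trained regressor; the sketch only shows how the output of the first stage serves as the intermediate input of the second.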
Prior to deployment, pre-trained ML models for the two ML modules may need to be evaluated and compared, while untrained models may also need to be trained, verified, and tested. Evaluating and training the ML modules usually requires at least three corresponding ground truth data sets representing the input (e.g., user images), the output (e.g., measurements), and the intermediate input (e.g., keypoints); where the “ground truth” qualifier is used for output data sets, but also for corresponding input-output data sets comprising an input data set and a corresponding ground truth output data set.
Importantly, while corresponding input-output data sets (i.e., user images and measurements) are readily available through scanners, and while corresponding intermediate-output data sets (i.e., keypoints and measurements) are easily generated artificially, obtaining corresponding input and intermediate data sets is difficult.
Annotation ML modules are usually evaluated and trained using manually determined keypoints, where body segmentation, i.e., estimating a sample human's body underneath the clothing, and body annotation, i.e., drawing keypoints or lines for each body feature for the sample human, are both carried out manually by a human annotator. The annotation ML modules are then trained on the manually annotated images collected and annotated for thousands of sample humans.
Such evaluation and training data for the annotation ML is difficult to obtain. Furthermore, even when available, it is difficult to assess for quality and accuracy.
Therefore, it would be an advancement in the state of the art to provide a system and method for estimating the performance of a pre-trained annotation ML module or for training an untrained annotation ML module without access to the intermediate ground truth data set, using only corresponding intermediate-output and input-output data sets. A related method can also be used to evaluate different human annotators or different human or non-human annotation schemes.
There are other applications in which machine learning modules need to be trained where corresponding ground truth data sets are not necessarily available, complete, or reliable.
It is against this background that the present invention was developed.
The present invention relates to methods and systems for evaluating or training a machine learning (ML) module for image annotation when its corresponding truth data sets are unavailable or unreliable. Related computer-implemented methods can be used to evaluate human annotators and annotation schemes.
More specifically, in various embodiments, the present invention is a computer-implemented method for evaluating a first machine learning module (MAB) having a first input and a first output, wherein the first machine learning module (MAB) is connected to a second machine learning module (MBC) having a second input and a second output, and wherein the first output of the first machine learning module (MAB) is the second input of the second machine learning module (MBC), the computer-implemented method executable by a hardware processor, the method comprising: receiving an intermediate data set (B1) and a corresponding output data set (C1), wherein the intermediate data set (B1) represents a data set for the second input of the second machine learning module (MBC), and wherein the output data set (C1) represents a corresponding ground truth data set for the second output of the second machine learning module (MBC); training the second machine learning module (MBC) using the intermediate data set (B1) and the output data set (C1); receiving a system input data set (A2) and a corresponding system output data set (C2), wherein the system input data set (A2) represents a data set for the first input of the first machine learning module, and wherein the system output data set (C2) represents a corresponding ground truth data set for the second output of the second machine learning module (MBC); generating a first evaluation data set (C′), wherein each data point in the first evaluation data set (C′) is generated by the second machine learning module (MBC) when a corresponding data point of the system input data set (A2) is input to the first machine learning module; and evaluating the first machine learning module (MAB) using a loss function based on a first distance metric between the first evaluation data set (C′) and the system output data set (C2).
In another embodiment, the method further comprises substituting the first machine learning module (MAB) with a third machine learning module (NAB) having a third input and a third output, such that the third output of the third machine learning module (NAB) is the second input of the second machine learning module (MBC); generating a second evaluation data set (C″), wherein each data point in the second evaluation data set (C″) is generated by the second machine learning module (MBC) when a corresponding data point of system input data set (A2) is input to the third machine learning module (NAB); evaluating the third machine learning module (NAB) using the loss function based on a second distance metric between the second evaluation data set (C″) and the system output data set (C2); and selecting one of the first machine learning module (MAB) and the third machine learning module (NAB) based on the loss function.
In one embodiment, the method further comprises tuning the parameters of the first machine learning module (MAB) based on the loss function.
In one embodiment, the first machine learning module (MAB) is a different type of machine learning module than the second machine learning module (MBC).
In one embodiment, the first machine learning module (MAB) has a different type of output than the second machine learning module (MBC).
In one embodiment, the method further comprises training the first machine learning module (MAB) using the loss function, the system input data set (A2), and the system output data set (C2), wherein the trained second machine learning module (MBC) is fixed.
In various embodiments, the system input data set (A2) comprises photos of clothed individuals, the intermediate data set (B1) comprises keypoint annotations of one or more body parts under clothing, and the output data sets (output data set (C1) and system output data set (C2)) comprise measurements of the one or more body parts.
In one embodiment, the first machine learning module (MAB) is selected from the group consisting of a deep neural network (DNN) and a regressor.
In one embodiment, the first machine learning module (MAB) is a residual neural network (ResNet).
In another embodiment, the second machine learning module (MBC) is selected from the group consisting of a deep neural network (DNN) and a regressor.
In yet another embodiment, the first distance metric is a batch distance measure selected from the group consisting of a mean absolute error (MAE), a mean squared error (MSE), a mean squared deviation (MSD), and a mean squared prediction error (MSPE).
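By way of a non-limiting illustration, two of the batch distance measures named above (MAE and MSE) may be computed over a batch of predictions as sketched below; the exact definitions used in a given deployment may differ:

```python
# Illustrative batch distance measures over equal-length prediction and
# ground-truth sequences. These are standard definitions, shown here only
# to make the loss-function discussion concrete.

def mae(predicted, truth):
    """Mean absolute error over a batch."""
    return sum(abs(p - t) for p, t in zip(predicted, truth)) / len(truth)

def mse(predicted, truth):
    """Mean squared error (equivalently, mean squared deviation)."""
    return sum((p - t) ** 2 for p, t in zip(predicted, truth)) / len(truth)
```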
In one embodiment, the method further comprises receiving an intermediate output data set (B2) corresponding to the system input data set (A2), wherein the intermediate output data set (B2) represents a ground truth data set for the first output of the first machine learning module (MAB); and generating an intermediate evaluation data set (B′), wherein each data point in the intermediate evaluation data set (B′) is generated by the first machine learning module (MAB) when a corresponding data point of the system input data set (A2) is input to the first machine learning module, wherein the loss function is based on the first distance metric between the first evaluation data set (C′) and the system output data set (C2) and a third distance metric between the intermediate evaluation data set (B′) and the intermediate output data set (B2).
In other embodiments, the present invention is a computer-implemented method for evaluating a first annotator (TAB) generating keypoint annotations of one or more body parts under clothing from one or more photos of clothed individuals, wherein the keypoint annotations are input to a machine learning module (MBC) used to generate one or more body part measurements, the computer-implemented method executable by a hardware processor, the method comprising: receiving a keypoint data set (B1) and a corresponding measurement data set (C1), wherein the keypoint data set (B1) represents a data set input for the machine learning module (MBC), and the measurement data set (C1) represents a corresponding ground truth output data set for the machine learning module (MBC); training the machine learning module (MBC) using the keypoint data set (B1) and the measurement data set (C1); receiving a photo data set (A2) and a corresponding measurement data set (C2), wherein the photo data set (A2) comprises photos of clothed individuals, and the measurement data set (C2) comprises measurements of one or more body parts of the clothed individuals; generating a first evaluation data set (C′), wherein each data point in the first evaluation data set (C′) is a body part measurement generated by the machine learning module (MBC) when a corresponding photo of the photo data set (A2) is annotated by the first annotator (TAB); and evaluating the first annotator (TAB) using a loss function based on a distance metric between the first evaluation data set (C′) and the measurement data set (C2).
In one embodiment, the method further comprises substituting the first annotator (TAB) with a second annotator (KAB), wherein the keypoint annotations generated by the second annotator (KAB) are input to the machine learning module (MBC) to generate one or more body part measurements; generating a second evaluation data set (C″), wherein each data point in the second evaluation data set (C″) is a body part measurement generated by the machine learning module (MBC) when a corresponding photo of the photo data set (A2) is annotated by the second annotator (KAB); evaluating the performance of the second annotator (KAB) using the loss function based on the distance metric between the second evaluation data set (C″) and the measurement data set (C2); and selecting one of the first annotator (TAB) and the second annotator (KAB) based on the loss function.
In various embodiments, a computer program product is disclosed. The computer program may be used for evaluating or training a machine learning (ML) module for image annotation when its corresponding truth data sets are unavailable, or for evaluating human annotators and other non-human (e.g., computer-based) annotation schemes, and may include a computer readable storage medium having program instructions, or program code, embodied therewith, the program instructions executable by a processor to cause the processor to perform the steps recited herein.
In various embodiments, a system is described, including a memory that stores computer-executable components and a hardware processor, operably coupled to the memory, that executes the computer-executable components stored in the memory, wherein the computer-executable components may include components communicatively coupled with the processor that execute the aforementioned steps.
In another embodiment, the present invention is a non-transitory, computer-readable storage medium storing executable instructions, which, when executed by a processor, cause the processor to perform a process for evaluating or training a machine learning (ML) module for image annotation when its corresponding truth data sets are unavailable, or for evaluating human annotators and annotation schemes, the instructions causing the processor to perform the aforementioned steps.
In another embodiment, the present invention is a system for evaluating or training a machine learning (ML) module for image annotation when its corresponding truth data sets are unavailable, or for evaluating human annotators and annotation schemes, the system comprising a user device having a 2D camera, a processor, a display, and a first memory; a server comprising a second memory and a data repository; a telecommunications link between said user device and said server; and a plurality of computer codes embodied on said first and second memory of said user device and said server, said plurality of computer codes which, when executed, causes said server and said user device to execute a process comprising the aforementioned steps.
In yet another embodiment, the present invention is a computerized server comprising at least one processor, memory, and a plurality of computer codes embodied on said memory, said plurality of computer codes which when executed causes said processor to execute a process comprising the aforementioned steps.
Other aspects and embodiments of the present invention include the methods, processes, and algorithms comprising the steps described herein, and also include the processes and modes of operation of the systems and servers described herein.
Yet other aspects and embodiments of the present invention will become apparent from the detailed description of the invention when read in conjunction with the attached drawings.
Embodiments of the present invention described herein are exemplary, and not restrictive. Embodiments will now be described, by way of example, with reference to the accompanying drawings, in which:
This application is related to U.S. Ser. No. 16/195,802, filed on 19 Nov. 2018, which issued as U.S. Pat. No. 10,321,728, issued on 18 Jun. 2019, entitled “SYSTEMS AND METHODS FOR FULL BODY MEASUREMENTS EXTRACTION,” which itself claims priority from U.S. Ser. No. 62/660,377, filed on 20 Apr. 2018, and entitled “SYSTEMS AND METHODS FOR FULL BODY MEASUREMENTS EXTRACTION USING A 2D PHONE CAMERA,” the entire disclosures of both of which are hereby incorporated by reference in their entireties herein.
With reference to the figures provided, embodiments of the present invention are now described in detail.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the invention. It will be apparent, however, to one skilled in the art that the invention can be practiced without these specific details. In other instances, structures, devices, activities, and methods are shown using schematics, use cases, and/or flow diagrams in order to avoid obscuring the invention. Although the following description contains many specifics for the purposes of illustration, anyone skilled in the art will appreciate that many variations and/or alterations to suggested details are within the scope of the present invention. Similarly, although many of the features of the present invention are described in terms of each other, or in conjunction with each other, one skilled in the art will appreciate that many of these features can be provided independently of other features. Accordingly, this description of the invention is set forth without any loss of generality to, and without imposing limitations upon, the invention.
In the present disclosure, the term “2D phone camera” is used to represent any traditional camera embedded in, or connected to, computing devices, such as smart phones, tablets, laptops, desktops, and the like. The terms “user images” and “photos” represent photos taken using such devices.
The evaluation or training of a ML module, such as MAB and MBC, requires corresponding input-output data points. Specifically, evaluating or training an MBC ML model requires (B, C) ground truth data sets (i.e., 106 and 110), whereby each data point in the B data set has one corresponding data point in the C data set. Similarly, evaluating or training a MAB ML model requires (A, B) ground truth data sets (i.e., 102 and 106), whereby each data point in the A data set has one corresponding data point in the B data set. Such evaluation and training data sets are usually collected from specifically designed measurement or data collection campaigns, as is discussed in the example setup of
The particularity of this setup is that while corresponding input-output data sets for evaluating or training MBC are available, corresponding input-output data sets for evaluating or training MAB are unavailable or unreliable. Rather, corresponding global, or system, input-output data, represented in this case by data sets 102 for A and 110 for C, is available.
The unavailability (or unreliability) of corresponding input-output ground truth data sets for training a ML model (e.g., 104) may stem from a number of practical factors such as the high difficulty, cost, duration, or complexity of existing data collection mechanisms. Similarly, the availability of global input-output ground truth data (also referred to herein as system input-output ground truth data, e.g., 102, 110) may be facilitated by the relative ease, low cost, speed, or simplicity of the corresponding data collection mechanisms. These factors are further illustrated in the context of the example of
Accurately estimating various body-related physical quantities such as body measurements (e.g., height), body part measurements (e.g., arm or foot dimensions), body weight, etc., can be performed through a multi-stage process having user images as an input and one or more body or body part measurements as an output. The annotation of user images is often required as an initial stage in this process, where annotation is the generation of annotation keypoints or annotation lines indicating corresponding body feature measurement locations underneath user clothing for one or more identified body features (e.g., height, size of foot, size of arm, size of torso, etc.). Image annotations may be carried out through one or more annotation ML modules that have been trained on each body feature, such as an annotation deep neural network (DNN). In the application of
The second stage of the process is a measurement stage where the keypoint annotations (B) are used as an intermediate input to generate one or more body or body part measurements (C). This stage is carried out through one or more ML modules 218 that have been trained to generate one or more measurements of one or more body features (C) from the keypoint annotations (B). In
Prior to deployment, pre-trained ML models for the two ML modules (214, 218) may need to be evaluated and compared. Furthermore, untrained models of the two ML modules (214, 218) may also need to be trained, verified, and tested. Evaluating and training the ML modules usually requires at least three corresponding ground truth data sets representing the input user images 212 (A), the output measurements 220 (C), and the intermediate input keypoints 216 (B).
In this application, corresponding input-output (A, C) data sets (i.e., user images 212 and measurements 220) are readily available through 3D scanners, where the same individuals are photographed with clothing, yielding an image data set 212, and scanned (see
Obtaining corresponding input 212 and intermediate 216 data sets, however, is difficult. Annotation ML modules are usually evaluated and trained using manually determined keypoints, where body segmentation, i.e., estimating a sample human's body underneath the clothing, and body annotation, i.e., drawing keypoints or lines for each body feature for the sample human, are both carried out manually by a human annotator. The annotation ML modules are then trained on the manually annotated images collected and annotated for thousands of sample humans.
Such ground truth evaluation and training data for the annotation ML 214 is time-consuming, costly, and hard to obtain as it requires the manual labor of multiple annotators. Furthermore, annotation accuracy and clarity need to be assessed ahead of any use of the generated corresponding (A, B) data sets for the evaluation or training of annotation ML modules 214. The variation in accuracy and quality emanates from the differences in manual annotator performance, but also from the performance variations among multiple annotation mechanisms used by the annotators (e.g., computer-aided manual annotation, scanned physical image annotation, etc.).
The current invention hence addresses the evaluation and training of a first ML module (104, 214) without corresponding ground truth input (102, 212) and intermediate (106, 216) data sets by using an existing second ML module (108, 218), its corresponding intermediate (106, 216) and output (110, 220) data sets, and corresponding global (or system) input (102, 212) and output (110, 220) data sets. Related methods are also disclosed to evaluate a process or transformation such as annotation (324). The “unavailability” of input (102, 212) and intermediate (106, 216) data sets in
It is important to note that the disclosed methods to evaluate one or more human annotators 324 can be used to also evaluate one or more annotation mechanisms. The term “annotator” henceforth generally includes human and non-human (e.g., computer-based) annotation schemes.
In a first step shown on the right side of the figure, a measurement regressor module designed to generate measurements for one or more body parts underneath clothing is trained 420 using input-output truth data sets 418 obtained from a database such as a mesh library 412. In the example embodiment of
In a second step shown on the left side of the figure, the ground truth system input and output data sets 410 are received from 3D body scans of one or more users 402 using a 3D body scanner 404. In
In a third step (not shown in
Finally, in a fourth step shown at the bottom of the figure, the training 424 of the keypoint annotation DNN 422 is carried out using a loss function based on a distance metric between the generated evaluation measurement data set and the system ground truth data set 408, leading to a trained keypoint annotation DNN 426. The training method is further discussed in the context of
In a first step shown on the right side of the figure, a measurement regressor module designed to generate measurements for one or more body parts underneath clothing is trained 520 using input-output truth data sets 518 obtained from a database such as a mesh library 512. As in
In a second step shown on the left side of the figure, the ground truth system input and output data sets 510 are received from 3D body scans of one or more users 502 using a 3D body scanner 504. As in
In a third step (not shown in
Finally, in a fourth step shown at the bottom of the figure, the evaluation 524 of a set of trained keypoint annotation DNNs 522 is carried out using a loss function based on a distance metric between the generated evaluation measurement data set and the system ground truth data set 508, leading to the evaluation and selection 524 of one or more trained keypoint annotation DNN 526, where the selection is based on the evaluation. The evaluation method is further discussed in the context of
In a first step (STEP 1), the second ML module MBC 638 is trained using its received (available) input-output truth data sets B1 636 and C1 640. In this step, the first (target) ML module 634 is not used.
In a second step (STEP 2), the ground truth input 642 (A2) and output 650 (C2) data sets are received, where A2 642 represents input for the evaluation or training target ML module MAB 644, and C2 650 represents corresponding ground truth output for the second ML module MBC 648. A2 and C2 are hence global (or system) ground-truth input-output data sets spanning the concatenated ML modules (shown in a dashed box). A corresponding intermediate data set (B2) 646 is either unavailable, difficult to obtain, or difficult to assess for quality.
In a third step (STEP 3), an evaluation data set (C′) 660 is generated by passing one or more data points from the input data set A2 652 through the concatenated ML modules (shown in a dashed box). Hence, each data point in C′ is the output of the second ML module MBC 658 when a corresponding data point of A2 is input to the target ML module MAB 654. B′ 656 represents a corresponding intermediate evaluation data set (B′), where each data point in B′ is the output of the first ML module MAB 654 when a corresponding data point of A2 is input to MAB.
Finally, in a fourth step (STEP 4), an evaluation of the target ML module MAB 654 is carried out using a loss function based on a distance metric between the evaluation data set (C′) 660 and the output data set (C2) 650. Such an evaluation can be based on corresponding portions of the input, output, and evaluation sets (A2, C2, and C′) rather than on their entirety. For example, in a ML training process, the corresponding ground truth data sets are usually divided into corresponding batches and used successively and repeatedly to modify the parameters of a ML model. In such a training context, the evaluation of the target ML module MAB can be regarded as a first step to its training, validation, and testing (see discussion and example below).
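The four steps above may be made concrete with the following non-limiting sketch, in which the second module MBC is "trained" as a toy one-parameter least-squares fit and the target module MAB is any callable; neither toy model is part of the claimed embodiments:

```python
# Illustrative sketch of STEPS 1-4: train M_BC on its available (B1, C1)
# ground truth, then score a target module M_AB by passing A2 through the
# concatenated modules and comparing the evaluation set C' against the
# system ground truth C2.

def train_mbc(b1, c1):
    """STEP 1: fit a toy one-parameter model b -> c by least squares."""
    scale = sum(b * c for b, c in zip(b1, c1)) / sum(b * b for b in b1)
    return lambda b: scale * b

def evaluate_mab(mab, mbc, a2, c2):
    """STEPS 3-4: generate C' through the concatenated modules, then
    return a loss based on a mean squared distance between C' and C2."""
    c_prime = [mbc(mab(a)) for a in a2]
    return sum((cp - c) ** 2 for cp, c in zip(c_prime, c2)) / len(c2)
```

As noted above, the evaluation may equally be computed over corresponding batches of (A2, C2) rather than over the entire data sets.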
The intermediate evaluation data set (B′) may be unavailable (e.g., difficult to measure), unreliable, or partially reliable. In some embodiments of the invention, ground truth for the intermediate output (e.g., “B2”) may be available and may be used, together with the intermediate evaluation data set (B′), for the evaluation step, alongside C′ and C2, as discussed below in more detail.
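Where such partial intermediate ground truth (B2) exists, the loss may blend an output-level distance (C′ versus C2) with an intermediate-level distance (B′ versus B2). The following sketch is purely illustrative; in particular, the weighting factor `alpha` is an assumption introduced here for exposition and is not specified by the embodiments above:

```python
# Non-limiting sketch of a blended loss combining the output-level and
# intermediate-level distance metrics. `alpha` (the blend weight) is a
# hypothetical parameter chosen for illustration.

def combined_loss(c_prime, c2, b_prime, b2, alpha=0.5):
    """Weighted sum of MSE(C', C2) and MSE(B', B2)."""
    d_out = sum((cp - c) ** 2 for cp, c in zip(c_prime, c2)) / len(c2)
    d_mid = sum((bp - b) ** 2 for bp, b in zip(b_prime, b2)) / len(b2)
    return alpha * d_out + (1.0 - alpha) * d_mid
```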
The methods described herein can be applied where more than one ML module is attached to the target ML module to be evaluated or trained. In reference to
The above generalized scenario requires two conditions to be satisfied. First, corresponding ground truth input-output data sets must be available to train each of the ML modules other than the target ML module (e.g., (B1, C1) in
In the example scenario of
Illustrative steps for training the two target ML modules T1 and T2 are shown in a solution listing at the bottom of
It is important to note that, following any training step in the solution steps 730 of
The evaluation method comprises receiving 802 an intermediate data set (B1) and a corresponding output data set (C1), wherein B1 represents input for MBC, and C1 represents corresponding ground truth output for MBC. The method then comprises training 804 module MBC using B1 and C1. The evaluation method also comprises receiving 806 an input data set (A2) and a corresponding output data set (C2), wherein A2 represents input for MAB, and C2 represents corresponding ground truth output for MBC. The receiving of (B1, C1) 802 and (A2, C2) 806 may occur in any order.
The evaluation method then comprises generating 808 a first evaluation data set (C′), wherein each data point in C′ is the output of MBC when a corresponding data point of A2 is input to MAB. Finally, the evaluation method comprises evaluating 810 the first machine learning module (MAB) using a loss function based on a distance metric between the evaluation data set (C′) and the output data set (C2). Loss function computation is further discussed below.
As in the evaluation method of
The selection method then comprises generating 908 a first evaluation data set (C′), wherein each data point in C′ is the output of the previously trained MBC when a corresponding data point of A2 is input to MAB, and MAB is connected to MBC. The selection method then comprises evaluating 912 the first machine learning module (MAB) using a loss function based on a distance metric between the evaluation data set (C′) and the output data set (C2).
The selection method also comprises generating 910 a second evaluation data set (C″), wherein each data point in C″ is the output of the previously trained MBC when a corresponding data point of A2 is input to NAB, and NAB is connected to MBC. The selection method then comprises evaluating 914 the third machine learning module (NAB) using a loss function based on a distance metric between the evaluation data set (C″) and the output data set (C2).
Finally, the selection method comprises selecting 916 one of MAB and NAB based on the loss function.
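For illustration only, the selection step may be sketched as evaluating each candidate module (e.g., MAB and NAB) against the same system ground truth (A2, C2) through the same fixed, previously trained MBC, and keeping whichever candidate yields the smaller loss; the candidate callables below are toy placeholders:

```python
# Non-limiting sketch of the selection method: score each candidate
# module through the fixed, trained M_BC and select the one with the
# smallest loss against the system ground truth C2.

def select_module(candidates, mbc, a2, c2):
    """Return the candidate with the lowest mean absolute loss."""
    def loss(module):
        c_eval = [mbc(module(a)) for a in a2]
        return sum(abs(ce - c) for ce, c in zip(c_eval, c2)) / len(c2)
    return min(candidates, key=loss)
```

The same pattern applies when the candidates are human or non-human annotators rather than ML modules, since each candidate is used only as a transformation from inputs to intermediate annotations.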
In various embodiments of the present invention, the first machine learning module (MAB) 104, 214, 634, 644, 654 may be a deep neural network (DNN) or a regressor. In particular, the first machine learning module (MAB) may be a residual neural network (ResNet), or a DNN based on a ResNet, as discussed below in the context of
Other machine learning methods are also within the scope of the annotation and measurement ML modules. For example, other ML algorithms including, but not limited to, nearest neighbor, decision trees, support vector machines (SVM), Adaboost, Bayesian networks, fuzzy logic models, various neural networks including deep learning networks, evolutionary algorithms, and so forth, are within the scope of the present invention. In the context of the present disclosure, the above ML methods represent different ML types.
In various embodiments of the present invention, the first ML module (MAB) 104, 214, 634, 644, 654 is a different type of machine learning module than the second ML module (MBC) 108, 218, 638, 648, 658. ML types denote ML methods using distinct architectures and characteristic parameter sets. For example, decision trees, nearest neighbor algorithms, various neural networks (e.g., CNNs, ResNets), regressors, SVMs, fuzzy logic models, and evolutionary algorithms represent different ML types.
In various embodiments of the present invention, the first ML module (MAB) 104, 214, 634, 644, 654 has a different type of output than the second ML module (MBC) 108, 218, 638, 648, 658. In the example of
In addition to the arguments discussed above relative to the distinctness and meaningfulness of outputs, the methods disclosed herein are distinct from the practice of freezing during neural network training in other crucial ways. First, unlike the one or more neural network layers that are frozen during training, the methods disclosed herein require reliable input-output ground truth data to be available for the ML module that is “fixed” (e.g., module MBC is
In some embodiments, in addition to the evaluation of a first ML module (MAB) using the loss function described in
As discussed above in the context of
The training method then comprises generating 1008 a first evaluation data set (C′), wherein each data point in C′ is the output of MBC when a corresponding data point of A2 is input to MAB. Finally, the training method comprises training 1010 the first machine learning module (MAB) using a loss function based on a distance metric between the evaluation data set (C′) and the output data set (C2), wherein the parameters of the trained MBC are fixed.
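A toy numeric illustration of this training method follows, assuming for simplicity that the trainable M_AB is reduced to a single scalar weight w and that the fixed, previously trained M_BC merely doubles its input. Both stand-ins are hypothetical; the point is that the gradient is taken with respect to w only, while M_BC's parameters stay fixed.

```python
def train_mab(a2, c2, epochs=200, lr=0.01):
    """Train a scalar-weight M_AB (w) through a fixed M_BC by gradient
    descent on the MSE between the evaluation set C' and the targets C2."""
    m_bc = lambda b: 2.0 * b  # previously trained M_BC; parameters fixed
    w = 0.0                   # the sole trainable parameter of M_AB
    for _ in range(epochs):
        # Step 1008: generate C' by chaining M_AB into the fixed M_BC.
        c_eval = [m_bc(w * a) for a in a2]
        # Step 1010: gradient of the MSE loss with respect to w only;
        # d/dw (2*w*a - c)^2 = 2*(2*w*a - c) * 2*a.
        grad = sum(2.0 * (ce - c) * 2.0 * a
                   for ce, c, a in zip(c_eval, c2, a2)) / len(a2)
        w -= lr * grad
    return w
```

With targets generated as c = 4a, the chained loss is minimized at w = 2, which the loop recovers even though no ground truth for M_AB's own output (the B domain) was ever provided.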
As discussed above in the context of
The methods described in the present disclosure can be used to evaluate any transformation T operating on an input to generate a useful output. One such transformation is manual annotation, which converts images of body parts under clothing into keypoints of body parts under clothing, as depicted in
The evaluation method comprises receiving 1102 a keypoint data set (B1) and a corresponding measurement data set (C1), wherein B1 represents input for the MBC, and C1 represents corresponding ground truth output for the MBC. MBC is then trained 1104 using B1 and C1, as is the case in the ML module evaluation method. The evaluation process also comprises receiving 1106 a photo data set (A2) and a corresponding measurement data set (C2), wherein A2 comprises photos of clothed individuals, and C2 comprises measurements of one or more body parts of the clothed individuals.
A first evaluation data set (C′) is then generated 1108, wherein each data point in C′ is a body part measurement generated by MBC when a corresponding photo of A2 is manually annotated by TAB. Finally, the annotator evaluation method comprises evaluating 1110 the first annotator (TAB) using a loss function based on a distance metric between the evaluation data set (C′) and the measurement data set (C2).
The selection method comprises receiving 1202 a keypoint data set (B1) and a corresponding measurement data set (C1), wherein B1 represents input for the MBC, and C1 represents corresponding ground truth output for the MBC. MBC is then trained 1204 using B1 and C1, as is the case in the ML module evaluation method. The selection process also comprises receiving 1206 a photo data set (A2) and a corresponding measurement data set (C2), wherein A2 comprises photos of clothed individuals, and C2 comprises measurements of one or more body parts of the clothed individuals.
A first evaluation data set (C′) is then generated 1208, wherein each data point in C′ is a body part measurement generated by MBC when a corresponding photo of A2 is manually annotated by TAB. The first annotator (TAB) is then evaluated 1212 using a loss function based on a distance metric between the evaluation data set (C′) and the measurement data set (C2).
A second evaluation data set (C″) is also generated 1210, wherein each data point in C″ is a body part measurement generated by MBC when a corresponding photo of A2 is manually annotated by KAB. The second annotator (KAB) is then evaluated 1214 using a loss function based on a distance metric between the evaluation data set (C″) and the measurement data set (C2).
Finally, the annotator selection method comprises selecting 1216 one of the TAB and the KAB based on the loss function.
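The annotator selection steps above can be sketched as follows. This assumes each annotator's keypoints have been pre-recorded per photo (since a human annotator is not a callable module) and that mean absolute error serves as the distance metric; the function names and the keypoints-to-measurement mapping are illustrative assumptions.

```python
def annotator_loss(m_bc, keypoints_per_photo, c2):
    """Steps 1208/1210 and 1212/1214: feed one annotator's keypoints through
    the fixed M_BC and score the resulting measurements against C2."""
    c_eval = [m_bc(kp) for kp in keypoints_per_photo]  # C' (or C'')
    return sum(abs(ce - c) for ce, c in zip(c_eval, c2)) / len(c2)

def select_annotator(kp_tab, kp_kab, m_bc, c2):
    """Step 1216: pick the annotator whose measurements track C2 more closely."""
    loss_t = annotator_loss(m_bc, kp_tab, c2)
    loss_k = annotator_loss(m_bc, kp_kab, c2)
    return "TAB" if loss_t <= loss_k else "KAB"
```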
In some embodiments, the present invention is therefore a computer-implemented method for evaluating a first annotator (TAB) generating keypoint annotations of one or more body parts under clothing from one or more photos of clothed individuals, wherein the keypoint annotations are input to a machine learning module (MBC) used to generate one or more body part measurements, the computer-implemented method executable by a hardware processor, the method comprising: receiving a keypoint data set (B1) and a corresponding measurement data set (C1), wherein the B1 represents a data set input for the MBC, and the C1 represents a corresponding ground truth output data set for the MBC; training the MBC using the B1 and the C1; receiving a photo data set (A2) and a corresponding measurement data set (C2), wherein the A2 comprises photos of clothed individuals, and the C2 comprises measurements of one or more body parts of the clothed individuals; generating a first evaluation data set (C′), wherein each data point in C′ is a body part measurement generated by the MBC when a corresponding photo of A2 is annotated by the TAB; and evaluating the first annotator (TAB) using a loss function based on a distance metric between the evaluation data set (C′) and the measurement data set (C2).
In one embodiment, the method further comprises substituting the TAB with a second annotator (KAB), wherein the keypoint annotations generated by the KAB are input to the MBC to generate one or more body part measurements; generating a second evaluation data set (C″), wherein each data point in C″ is a body part measurement generated by the MBC when a corresponding photo of A2 is annotated by the KAB; evaluating the performance of the KAB using the loss function based on the distance metric between the C″ and the C2; and selecting one of the TAB and the KAB based on the loss function.
The example PSPNet of
ResNet backbone architectures may also include ResNeXt. In one embodiment, the ResNeXt algorithm is implemented as described in Saining Xie, et al., “Aggregated Residual Transformations for Deep Neural Networks,” CVPR 2017, Nov. 9, 2017, available at arXiv:1611.05431, which is hereby incorporated by reference in its entirety as if fully set forth herein.
In the example of
PSPNet, ResNet, and ResNeXt are only illustrative deep learning network algorithms that are within the scope of the present invention, and the present invention is not limited to the use of PSPNet, ResNet, or ResNeXt. Other ML algorithms are also within the scope of the present invention. For example, in one embodiment of the present invention, a convolutional neural network (CNN) is utilized as an ML module to extract and to annotate body parts.
The final objective is to train the annotation DNN, denoted G, based on a loss function expressed as follows:
G* = arg min_G {∥z_R − R_GT(G(x_G))∥²}
where G denotes the annotation DNN, x_G denotes the input image set, R_GT denotes the trained ground truth regressor whose parameters are fixed, and z_R denotes the corresponding ground truth measurement set.
In another embodiment of the present invention, a partial or unreliable form of the intermediate ground truth data set may be available. For example, such data may be generated through simulation or any other external evaluation method. Referring to
G* = arg min_G {λ∥y_G − G(x_G)∥² + ∥z_R − R_GT(G(x_G))∥²}
where λ is a weighting coefficient for the pseudo-label term, and y_G is the partial or unreliable intermediate ground truth data set (e.g., a landmark heatmap) used as a pseudo-label; G, x_G, R_GT, and z_R are as in the preceding loss function.
Using the loss functions described above, the DNN hence learns the mapping from the image set x_G through the trained regressor loss (L_R), with an optional weighted adjustment from the DNN loss term (L_G) based on a pseudo-label (landmark heatmap) y_G.
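For illustration, the combined loss above can be evaluated numerically as follows, with small lists of numbers standing in for the image set, pseudo-labels, and measurements (a deliberate simplification of the actual image tensors and heatmaps; all names are illustrative).

```python
def combined_loss(g, r_gt, x_g, y_g, z_r, lam):
    """Compute lam * ||y_G - G(x_G)||^2 + ||z_R - R_GT(G(x_G))||^2 over a
    data set, with g (the DNN) and r_gt (the fixed regressor) as callables."""
    b_pred = [g(x) for x in x_g]  # G(x_G): predicted annotations
    term_g = sum((y - b) ** 2 for b, y in zip(b_pred, y_g))        # DNN loss L_G
    term_r = sum((z - r_gt(b)) ** 2 for b, z in zip(b_pred, z_r))  # regressor loss L_R
    return lam * term_g + term_r
```

Setting lam = 0 recovers the first loss function (no intermediate ground truth); a nonzero lam blends in the pseudo-label term when partial or unreliable intermediate data is available.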
In one embodiment, a training procedure associated with the loss functions described above is the following:
It is important to note that the steps listed under (2.1) in the algorithm above operate on batches of data. Hence, the corresponding data sets (e.g., input images and corresponding ground truth measurement outputs) are divided into batches for steps (2.1.1) through (2.1.4). Batches and data sets may be reused across training epochs.
In addition, the convergence condition typically reflects the training goals. For example, reaching a value of the loss function that is below a given loss threshold is a typical convergence condition that implies a satisfactory distance between the model output and ground truth output (e.g., predicted vs. real measurements). Apart from the loss function, convergence conditions may also depend on additional factors such as the number of loops (i.e., epochs) or batches traversed.
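A batched training loop with such a convergence condition can be sketched as follows, assuming a hypothetical `batches` iterable of (inputs, truths) pairs and a `train_step` callable that updates the model and returns the batch loss; both interfaces are illustrative assumptions, not part of the disclosed system.

```python
def train_until_converged(batches, train_step, loss_threshold, max_epochs):
    """Loop over batches each epoch (cf. steps (2.1.1)-(2.1.4)) and stop when
    the epoch-average loss drops below loss_threshold or max_epochs is hit."""
    avg_loss = float("inf")
    for epoch in range(max_epochs):
        losses = [train_step(x, y) for x, y in batches]
        avg_loss = sum(losses) / len(losses)
        if avg_loss < loss_threshold:  # convergence condition on the loss
            return epoch + 1, avg_loss
    return max_epochs, avg_loss        # convergence condition on epoch count
```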
An exemplary embodiment of the present disclosure may include one or more servers (management computing entities), one or more networks, and one or more clients (user computing entities). Each of these components, entities, devices, and systems (similar terms used herein interchangeably) may be in direct or indirect communication with, for example, one another over the same or different wired or wireless networks. Additionally, while
As shown in
In one embodiment, the management computing entity 1402 may further include or be in communication with non-transitory memory (also referred to as non-volatile media, non-volatile storage, non-transitory storage, memory, memory storage, and/or memory circuitry—similar terms used herein interchangeably). In one embodiment, the non-transitory memory or storage may include one or more non-transitory memory or storage media 1406, including but not limited to hard disks, ROM, PROM, EPROM, EEPROM, flash memory, MMCs, SD memory cards, Memory Sticks, CBRAM, PRAM, FeRAM, NVRAM, MRAM, RRAM, SONOS, FJG RAM, Millipede memory, racetrack memory, and/or the like. As will be recognized, the non-volatile (or non-transitory) storage or memory media may store databases, database instances, database management systems, data, applications, programs, program modules, scripts, source code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like. The term database, database instance, and/or database management system (similar terms used herein interchangeably) may refer to a collection of records or data that is stored in a computer-readable storage medium using one or more database models, such as a hierarchical database model, network model, relational model, entity-relationship model, object model, document model, semantic model, graph model, and/or the like.
In one embodiment, the management computing entity 1402 may further include or be in communication with volatile media (also referred to as volatile storage, memory, memory storage, memory and/or circuitry—similar terms used herein interchangeably). In one embodiment, the volatile storage or memory may also include one or more volatile storage or memory media 1408, including but not limited to RAM, DRAM, SRAM, FPM DRAM, EDO DRAM, SDRAM, DDR SDRAM, DDR2 SDRAM, DDR3 SDRAM, RDRAM, TTRAM, T-RAM, Z-RAM, RIMM, DIMM, SIMM, VRAM, cache memory, register memory, and/or the like. As will be recognized, the volatile storage or memory media may be used to store at least portions of the databases, database instances, database management systems, data, applications, programs, program modules, scripts, source code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like being executed by, for example, the processor 1404. Thus, the databases, database instances, database management systems, data, applications, programs, program modules, scripts, source code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like may be used to control certain aspects of the operation of the management computing entity 1402 with the assistance of the processor 1404 and operating system.
As indicated, in one embodiment, the management computing entity 1402 may also include one or more communications interfaces 1410 for communicating with various computing entities, such as by communicating data, content, and/or information (similar terms used herein interchangeably) that can be transmitted, received, operated on, processed, displayed, stored, and/or the like. Such communication may be executed using a wired data transmission protocol, such as fiber distributed data interface (FDDI), digital subscriber line (DSL), Ethernet, asynchronous transfer mode (ATM), frame relay, data over cable service interface specification (DOCSIS), or any other wired transmission protocol. Similarly, the management computing entity 1402 may be configured to communicate via wireless external communication networks using any of a variety of protocols, such as general packet radio service (GPRS), Universal Mobile Telecommunications System (UMTS), Code Division Multiple Access 2000 (CDMA2000), CDMA2000 1× (1×RTT), Wideband Code Division Multiple Access (WCDMA), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), Long Term Evolution (LTE), Evolved Universal Terrestrial Radio Access Network (E-UTRAN), Evolution-Data Optimized (EVDO), High-Speed Packet Access (HSPA), High-Speed Downlink Packet Access (HSDPA), IEEE 802.11 (Wi-Fi), Wi-Fi Direct, 802.16 (WiMAX), ultra-wideband (UWB), infrared (IR) protocols, near field communication (NFC) protocols, Wibree, Bluetooth protocols, wireless universal serial bus (USB) protocols, and/or any other wireless protocol.
Although not shown, the management computing entity 1402 may include or be in communication with one or more input elements, such as a keyboard input, a mouse input, a touch screen/display input, motion input, movement input, audio input, pointing device input, joystick input, keypad input, and/or the like. The management computing entity 1402 may also include or be in communication with one or more output elements (not shown), such as audio output, video output, screen/display output, motion output, movement output, and/or the like.
As will be appreciated, one or more of the components of the management computing entity 1402 may be located remotely from other management computing entity 1402 components, such as in a distributed system. Furthermore, one or more of the components may be combined and additional components performing functions described herein may be included in the management computing entity 1402. Thus, the management computing entity 1402 can be adapted to accommodate a variety of needs and circumstances. As will be recognized, these architectures and descriptions are provided for exemplary purposes only and are not limiting to the various embodiments.
A user may be an individual, a company, an organization, an entity, a department within an organization, a representative of an organization and/or person, and/or the like.
The signals provided to and received from the transmitter 1504 and the receiver 1506, respectively, may include signaling information in accordance with air interface standards of applicable wireless systems. In this regard, the user computing entity 1502 may be capable of operating with one or more air interface standards, communication protocols, modulation types, and access types. More particularly, the user computing entity 1502 may operate in accordance with any of a number of wireless communication standards and protocols, such as those described above with regard to the management computing entity 1402. In a particular embodiment, the user computing entity 1502 may operate in accordance with multiple wireless communication standards and protocols, such as UMTS, CDMA2000, 1×RTT, WCDMA, TD-SCDMA, LTE, E-UTRAN, EVDO, HSPA, HSDPA, Wi-Fi, Wi-Fi Direct, WiMAX, UWB, IR, NFC, Bluetooth, USB, and/or the like. Similarly, the user computing entity 1502 may operate in accordance with multiple wired communication standards and protocols, such as those described above with regard to the management computing entity 1402 via a network interface 1514.
Via these communication standards and protocols, the user computing entity 1502 can communicate with various other entities using concepts such as Unstructured Supplementary Service Data (USSD), Short Message Service (SMS), Multimedia Messaging Service (MMS), Dual-Tone Multi-Frequency Signaling (DTMF), and/or Subscriber Identity Module Dialer (SIM dialer). The user computing entity 1502 can also download changes, add-ons, and updates, for instance, to its firmware, software (e.g., including executable instructions, applications, program modules), and operating system.
According to one embodiment, the user computing entity 1502 may include location determining aspects, devices, modules, functionalities, and/or similar words used herein interchangeably. For example, the user computing entity 1502 may include outdoor positioning aspects, such as a location module adapted to acquire, for example, latitude, longitude, altitude, geocode, course, direction, heading, speed, coordinated universal time (UTC), date, and/or various other information/data. In one embodiment, the location module can acquire data, sometimes known as ephemeris data, by identifying the number of satellites in view and the relative positions of those satellites. The satellites may be a variety of different satellites, including Low Earth Orbit (LEO) satellite systems, Department of Defense (DOD) satellite systems, the European Union Galileo positioning systems, the Chinese Compass navigation systems, Indian Regional Navigational satellite systems, and/or the like. Alternatively, the location information can be determined by triangulating the user computing entity's 1502 position in connection with a variety of other systems, including cellular towers, Wi-Fi access points, and/or the like. Similarly, the user computing entity 1502 may include indoor positioning aspects, such as a location module adapted to acquire, for example, latitude, longitude, altitude, geocode, course, direction, heading, speed, time, date, and/or various other information/data. Some of the indoor systems may use various position or location technologies including RFID tags, indoor beacons or transmitters, Wi-Fi access points, cellular towers, nearby computing devices (e.g., smartphones, laptops), and/or the like. For instance, such technologies may include iBeacons, Gimbal proximity beacons, Bluetooth Low Energy (BLE) transmitters, NFC transmitters, and/or the like.
These indoor positioning aspects can be used in a variety of settings to determine the location of someone or something to within inches or centimeters.
The user computing entity 1502 may also comprise a user interface that can include a display 1512 coupled to a processor 1508 and/or a user input interface coupled to the processor 1508. For example, the user interface may be a user application, browser, user interface, and/or similar words used herein interchangeably executing on and/or accessible via the user computing entity 1502 to interact with and/or cause display of information from the management computing entity 1402, as described herein. The user input interface can comprise any of a number of devices or interfaces allowing the user computing entity 1502 to receive data, such as a keypad 1514 (hard or soft), a touch display, voice/speech or motion interfaces, or other input device. In embodiments including a keypad 1514, the keypad 1514 can include (or cause display of) the conventional numeric (0-9) and related keys (#, *), and other keys used for operating the user computing entity 1502 and may include a full set of alphabetic keys or set of keys that may be activated to provide a full set of alphanumeric keys. In addition to providing input, the user input interface can be used, for example, to activate or deactivate certain functions, such as screen savers and/or sleep modes.
The user computing entity 1502 can also include volatile storage or memory 1518 and/or non-transitory storage or memory 1520, which can be embedded and/or may be removable. For example, the non-transitory memory may be ROM, PROM, EPROM, EEPROM, flash memory, MMCs, SD memory cards, Memory Sticks, CBRAM, PRAM, FeRAM, NVRAM, MRAM, RRAM, SONOS, FJG RAM, Millipede memory, racetrack memory, and/or the like. The volatile memory may be RAM, DRAM, SRAM, FPM DRAM, EDO DRAM, SDRAM, DDR SDRAM, DDR2 SDRAM, DDR3 SDRAM, RDRAM, TTRAM, T-RAM, Z-RAM, RIMM, DIMM, SIMM, VRAM, cache memory, register memory, and/or the like. The volatile and non-volatile (or non-transitory) storage or memory can store databases, database instances, database management systems, data, applications, programs, program modules, scripts, source code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like to implement the functions of the user computing entity 1502. As indicated, this may include a user application that is resident on the entity or accessible through a browser or other user interface for communicating with the management computing entity 1402 and/or various other computing entities.
In another embodiment, the user computing entity 1502 may include one or more components or functionality that are the same or similar to those of the management computing entity 1402, as described in greater detail above. As will be recognized, these architectures and descriptions are provided for exemplary purposes only and are not limiting to the various embodiments.
The present invention may be implemented in a client-server environment.
In some embodiments of the present invention, the entire system can be implemented and offered to the end-users and operators over the Internet, in a so-called cloud implementation. No local installation of software or hardware would be needed, and the end-users and operators would be allowed access to the systems of the present invention directly over the Internet, using either a web browser or similar software on a client, which client could be a desktop, laptop, mobile device, and so on. This eliminates any need for custom software installation on the client side and increases the flexibility of delivery of the service (software-as-a-service) and increases user satisfaction and ease of use. Various business models, revenue models, and delivery mechanisms for the present invention are envisioned, and are all to be considered within the scope of the present invention.
Although an example processing system has been described above, implementations of the subject matter and the functional operations described herein can be implemented in other types of digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them.
Embodiments of the subject matter and the operations described herein can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described herein can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on computer storage medium for execution by, or to control the operation of, information/data processing apparatus. Alternatively, or in addition, the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, which is generated to encode information/data for transmission to suitable receiver apparatus for execution by an information/data processing apparatus. A computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them. Moreover, while a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially generated propagated signal. The computer storage medium can also be, or be included in, one or more separate physical components or media (e.g., multiple CDs, disks, or other storage devices).
The operations described herein can be implemented as operations performed by an information/data processing apparatus on information/data stored on one or more computer-readable storage devices or received from other sources.
The term “data processing apparatus” encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or multiple ones, or combinations, of the foregoing. The apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them. The apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing, and grid computing infrastructures.
A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a standalone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or information/data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described herein can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input information/data and generating output. Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and information/data from a read only memory or a random-access memory or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive information/data from or transfer information/data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Devices suitable for storing computer program instructions and information/data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
To provide for interaction with a user, embodiments of the subject matter described herein can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information/data to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser.
Embodiments of the subject matter described herein can be implemented in a computing system that includes a back end component, e.g., as an information/data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface or a web browser through which a user can interact with an implementation of the subject matter described herein, or any combination of one or more such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital information/data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), an inter-network (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks).
The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some embodiments, a server transmits information/data (e.g., an HTML page) to a client device (e.g., for purposes of displaying information/data to and receiving user input from a user interacting with the client device). Information/data generated at the client device (e.g., a result of the user interaction) can be received from the client device at the server.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any embodiment or of what may be claimed, but rather as descriptions of features specific to particular embodiments. Certain features that are described herein in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous.
In some embodiments of the present invention, the entire system can be implemented and offered to end-users and operators over the Internet, in a so-called cloud implementation. No local installation of software or hardware would be needed, and the end-users and operators would be allowed access to the systems of the present invention directly over the Internet, using either a web browser or similar software on a client, which client could be a desktop, laptop, mobile device, and so on. This eliminates any need for custom software installation on the client side, increases the flexibility of delivery of the service (software-as-a-service), and increases user satisfaction and ease of use. Various business models, revenue models, and delivery mechanisms for the present invention are envisioned, and are all to be considered within the scope of the present invention.
In general, the method executed to implement the embodiments of the invention may be implemented as part of an operating system or a specific application, component, program, object, module, or sequence of instructions referred to as “computer program(s)” or “computer code(s).” The computer programs typically comprise one or more instructions, set at various times in various memory and storage devices in a computer, that, when read and executed by one or more processors in the computer, cause the computer to perform the operations necessary to execute elements involving the various aspects of the invention. Moreover, while the invention has been described in the context of fully functioning computers and computer systems, those skilled in the art will appreciate that the various embodiments of the invention are capable of being distributed as a program product in a variety of forms, and that the invention applies equally regardless of the particular type of machine- or computer-readable media used to actually effect the distribution. Examples of computer-readable media include, but are not limited to, recordable-type media such as volatile and non-volatile (or non-transitory) memory devices, floppy and other removable disks, hard disk drives, and optical disks, including Compact Disk Read-Only Memory (CD-ROMs) and Digital Versatile Disks (DVDs), as well as digital and analog communication media.
One of ordinary skill in the art will recognize that the use cases, structures, schematics, and flow diagrams may be performed in other orders or combinations without departing from the broader scope of the invention, the inventive concept of the present invention remaining intact. Every embodiment may be unique, and methods/steps may be shortened or lengthened, overlapped with other activities, postponed, delayed, or continued after a time gap, so that every user is accommodated in practicing the methods of the present invention.
Although the present invention has been described with reference to specific exemplary embodiments, it will be evident that various modifications and changes can be made to these embodiments without departing from the broader scope of the invention. Accordingly, the specification and drawings are to be regarded in an illustrative sense rather than a restrictive sense. It will also be apparent to the skilled artisan that the embodiments described above are specific examples of a single broader invention, which may have greater scope than any of the singular descriptions taught. Many alterations may be made in the descriptions without departing from the scope of the present invention.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2021/050798 | 9/17/2021 | WO |
Number | Date | Country
---|---|---
62706905 | Sep 2020 | US