METHODS, DEVICES AND SYSTEMS TO DETERMINE AND VISUALIZE BREAST BOUNDARY, PREDICT BRA CUP SIZE AND/OR EVALUATE PERFORMANCE OF A GARMENT USING 4D BODY SCANS OF AN INDIVIDUAL

Information

  • Patent Application
  • Publication Number
    20230394695
  • Date Filed
    October 15, 2021
  • Date Published
    December 07, 2023
Abstract
Apparatuses, systems and methods for determining the boundary of breasts based on a displacement parameter determined at least in part from a series of three-dimensional images captured in time while an individual is in motion are provided. The displacement parameter may be determined with respect to a base image. The base image may be one of the 3D images captured while the individual is moving, or a 3D image acquired while the individual is stationary. The displacement may be a vertical displacement. Once the boundary of the breasts is determined, the breasts may be separated from other image data, and the cup size of the individual may be predicted. Apparatuses, systems and methods for evaluating the performance of a garment based on a displacement parameter determined at least in part from a series of three-dimensional images captured in time while an individual is in motion are also provided.
Description
BACKGROUND

One of the most critical anthropometric measurements for bra product development is breast size, which is associated with bra cup size. The boundary of the breast(s) is important in determining breast size. This boundary information, which requires identifying where the fat tissue of the breast ceases, is not available on a 3D scan. Without this information, there is no way to determine breast volume correctly or to predict a breast size. It is therefore not surprising that bra fit has been troubling women for decades and that as many as 85% of women wear the wrong size of bra on a daily basis. In addition, an ill-fitting bra can bring health concerns to the wearer, such as back pain, shoulder pain and neck pain.


Currently, it is common practice to find the boundary through physical manipulation of the breasts, such as by pushing the entire breast upward to reveal the folding line. However, for obvious reasons, this method may be an unpleasant experience for individuals.


SUMMARY

Accordingly, disclosed is a non-contact method for determining a boundary of breasts. The method may comprise receiving, by a processor, a plurality of three-dimensional (3D) images. The three-dimensional images may be successive 3D images. The 3D images may include the breasts of the same individual. The 3D images may be acquired while the individual is moving. The method may further comprise receiving, by the processor, a three-dimensional (3D) image acquired while the individual is stationary, defining a number of datapoints on the surface of the breasts in the 3D image acquired while the individual is stationary and a number of datapoints on the surface of an alignment region and selecting a subset of the 3D images acquired while the individual is moving. For each selected 3D image in the subset, the method may comprise pre-processing the selected 3D image to at least remove image data outside a predetermined region and rotate the selected 3D image to have each selected 3D image in the same orientation as the 3D image acquired while the individual is stationary, aligning the selected 3D image with respect to the alignment region of the 3D image acquired when the individual is stationary, defining a number of datapoints on the surface of the breasts in the selected 3D image and comparing the selected 3D image with the 3D image acquired when the individual is stationary by determining for each defined datapoint a vertical displacement. The method may further comprise determining, for each defined datapoint, a displacement parameter based on the determined vertical displacement for each 3D image in the subset of 3D images with respect to the 3D image acquired when the individual is stationary for the same defined datapoint, generating a mapping based on the displacement parameter for each defined datapoint and determining the boundary of the breasts using a threshold based on the mapping.


In an aspect of the disclosure, the subset of 3D images may comprise 3D images showing at least one complete gait cycle. The subset of 3D images may also comprise at least a predetermined number of 3D images. In an aspect of the disclosure, the subset of 3D images may comprise 3D images acquired after a first preset number of 3D images and before a second preset number of 3D images.


In an aspect of the disclosure, the predetermined region may include the torso. In an aspect of the disclosure, the preprocessing may further comprise identifying an underbust level and bust point for each breast and removing image data below the identified underbust level.


In an aspect of the disclosure, the alignment region may be at an upper back area of the individual. In an aspect of the disclosure, the alignment may comprise minimizing a shape discrepancy between each selected 3D image and the 3D image acquired while the individual is stationary by iteratively moving a selected 3D image and calculating the shape discrepancy.
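By way of illustration only, a minimal sketch of such an iterative alignment is given below in Python. The translation-only search, step schedule, point counts and function names (shape_discrepancy, align_to_region) are assumptions made for the sketch, not a definitive implementation of the disclosed alignment:

```python
import numpy as np
from scipy.spatial import cKDTree

def shape_discrepancy(moving_pts, base_pts):
    """Mean squared distance from each moving point to its nearest base point."""
    tree = cKDTree(base_pts)
    dists, _ = tree.query(moving_pts)
    return np.mean(dists ** 2)

def align_to_region(moving_region, base_region, steps=(4.0, 1.0, 0.25), iters=50):
    """Iteratively translate the moving alignment region (e.g., the upper back)
    to minimize its shape discrepancy with the base alignment region."""
    offset = np.zeros(3)
    best = shape_discrepancy(moving_region, base_region)
    for step in steps:                       # coarse-to-fine translation search
        for _ in range(iters):
            improved = False
            for axis in range(3):
                for sign in (+1.0, -1.0):
                    trial = offset.copy()
                    trial[axis] += sign * step
                    d = shape_discrepancy(moving_region + trial, base_region)
                    if d < best:
                        best, offset, improved = d, trial, True
            if not improved:
                break
    return offset, best

# Illustrative use with synthetic data: a shifted copy of the base region.
rng = np.random.default_rng(1)
base_region = rng.normal(size=(400, 3))
moving_region = base_region + np.array([2.0, -1.5, 0.5])
offset, residual = align_to_region(moving_region, base_region)
# Expected offset is roughly the negative of the applied shift, i.e. [-2, 1.5, -0.5].
print("recovered offset:", np.round(offset, 2), "residual:", round(float(residual), 6))
```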


In an aspect of the disclosure, the pre-processing may further comprise determining whether another body part is covering a surface of the breast and torso region and in response to determining that another body part is covering a surface of the breast or torso region, removing image data associated with the another body part and filling in a space corresponding to the removed image data with surface image points predicted for the space to maintain a curvature with a surrounding surface of the breast or maintain the curvature of the torso region.


In an aspect of the disclosure, the pre-processing may further comprise determining a first average value of image points in the predetermined region in a first direction, determining a second average value of image points in the predetermined region in a second direction orthogonal to the first direction and orthogonal to the longitudinal axis of the body; and defining the central axis of the predetermined region as the axis intersecting the first average value and the second average value and parallel to the longitudinal axis of the body. The first direction may be orthogonal to a longitudinal axis of the individual. The pre-processing may further comprise shifting the selected 3D image such that the central axis intersects an origin.


In an aspect of the disclosure, the vertical displacement may be determined using






d_j = z_ij − z_i0


where d_j is an array containing the vertical displacements of all the defined datapoints for the j-th 3D image, where 1≤j≤N and N is the number of 3D images in the subset; z_ij is the z-coordinate of the i-th defined datapoint of the j-th 3D image, where 1≤i≤M and M is the number of defined datapoints; and z_i0 is the z-coordinate of the i-th defined datapoint of the 3D image acquired while the same individual is stationary. In an aspect of the disclosure, the displacement parameter may be a standard deviation.
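As a numerical illustration of this computation (the array shapes, the synthetic data and the variable names z_base and z_moving are assumptions for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 12, 300                              # N selected 3D images, M defined datapoints
z_base = rng.normal(0.0, 1.0, size=M)       # z_i0: base (e.g., stationary) scan
bounce = np.linspace(0.0, 1.0, M)           # synthetic: larger motion toward the breast apex
z_moving = z_base + bounce * rng.normal(0.0, 0.05, size=(N, M))   # z_ij for each scan j

d = z_moving - z_base                       # d_j = z_ij - z_i0, one row per selected scan
disp_param = d.std(axis=0)                  # displacement parameter per datapoint (standard deviation)
print(disp_param.shape)                     # (M,): one value per defined datapoint
```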


In an aspect of the disclosure, the threshold may be determined based on a range of the displacement parameters for the datapoints and a preset percentage. For example, the threshold may be determined by taking an average of the displacement parameters in a first region, subtracting an average of the displacement parameters in a second region, and multiplying the difference by the preset percentage.
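Continuing the illustration, such a threshold could be computed as follows; the region indices, the 30% preset percentage and the function name boundary_threshold are assumptions for the sketch:

```python
import numpy as np

def boundary_threshold(disp_param, first_idx, second_idx, preset_percentage=0.30):
    """Average of the displacement parameter in a first region, minus the average
    in a second region, multiplied by a preset percentage (30% is an assumption)."""
    return preset_percentage * (disp_param[first_idx].mean() - disp_param[second_idx].mean())

disp_param = np.linspace(0.0, 1.0, 300)          # stand-in per-datapoint parameter
threshold = boundary_threshold(disp_param,
                               first_idx=slice(250, 300),   # e.g., near a bust point
                               second_idx=slice(0, 50))     # e.g., on the chest wall
print(round(threshold, 3))
```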


In an aspect of the disclosure, the datapoints on the surface of the breasts may be defined using vertical slices that are parallel to the frontal or coronal plane of the individual. In this aspect, the datapoints may be defined by partitioning, by the processor, each breast into vertical slices and partitioning, by the processor, each vertical slice into a plurality of portions on the surface of the respective breast based on a fixed angular interval. Each portion may correspond to an angle value, and each portion may include a set of points. For each portion on each slice, the processor may determine an average distance among distances of the set of points with respect to one of the associated reference points for the corresponding vertical slice, and set a point associated with the average distance as a datapoint represented by the angle value corresponding to the portion. The datapoint may be one of the number of datapoints identified. When there is an absence of image points for particular portions in any of the vertical slices, undefined values may be assigned to the datapoints for those portions. In an aspect of the disclosure, determining of the boundary may further comprise identifying, for each angle having a datapoint between a first angle and a second angle, a vertical slice having the displacement parameter closest to the threshold and identifying a median vertical slice among the identified vertical slices. Datapoints in the posterior direction of the median vertical slice may be removed.
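A minimal sketch of how one vertical slice might be partitioned into angular portions and reduced to datapoints is given below (the 5-degree angular interval, the choice of in-slice coordinates and the function name slice_datapoints are assumptions for the sketch):

```python
import numpy as np

def slice_datapoints(slice_pts, reference_xy, angle_step_deg=5.0):
    """For one slice: bin surface points by angle about the slice's reference
    point and return the average radial distance per angular portion
    (NaN marks portions with no image points, i.e. undefined datapoints)."""
    rel = slice_pts[:, :2] - reference_xy                 # assumed in-slice coordinates
    angles = np.degrees(np.arctan2(rel[:, 1], rel[:, 0])) % 360.0
    radii = np.linalg.norm(rel, axis=1)
    n_bins = int(round(360.0 / angle_step_deg))
    datapoints = np.full(n_bins, np.nan)                  # undefined where no points fall
    bins = (angles // angle_step_deg).astype(int)
    for b in range(n_bins):
        in_bin = radii[bins == b]
        if in_bin.size:
            datapoints[b] = in_bin.mean()                 # average distance for this portion
    return datapoints

# Illustrative use: a noisy circle of radius 10 around the reference point.
rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 2000)
pts = np.column_stack([10 * np.cos(theta), 10 * np.sin(theta)]) + rng.normal(0, 0.1, (2000, 2))
print(np.nanmean(slice_datapoints(pts, reference_xy=np.zeros(2))))   # ≈ 10
```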


In other aspects of the disclosure, the vertical slices may be parallel to the sagittal plane of the individual. In this aspect, the datapoints on the surface of the breasts may be defined by partitioning, by the processor, each breast into vertical slices, the vertical slices being parallel to the sagittal plane and partitioning, by the processor, each vertical slice into a plurality of portions on the surface of the respective breast based on a fixed interval with respect to a first direction. Each portion may correspond to a specific value in the first direction, and each portion may include a set of points. The first direction may be orthogonal to the longitudinal axis and parallel to the sagittal plane. For each portion on each slice, the processor may determine an average coordinate among coordinates of the set of points for a corresponding vertical slice, the coordinate being in a direction parallel to the longitudinal axis and set a point associated with the average coordinate as a datapoint represented by the specific value corresponding to the portion. The datapoint may be one of the number of datapoints identified. In an aspect of the disclosure, determining of the boundary may further comprise identifying, for each vertical slice, the specific value having the displacement parameter closest to the threshold and identifying a median specific value among the identified specific values. Datapoints in the posterior direction of the median specific value may be removed.


In an aspect of the disclosure, the mapping may be displayed. In an aspect of the disclosure, the method may further comprise removing image data from the 3D image acquired when the individual is stationary based on the threshold and displaying a 3D image of the breasts.


Also disclosed is a non-contact method for predicting a cup size of breasts of an individual. The method may comprise receiving, by a processor, a plurality of three-dimensional (3D) images. The three-dimensional images may be successive 3D images. The 3D images may include the breasts of the same individual. The 3D images may be acquired while the individual is moving. The method may further comprise receiving, by the processor, a three-dimensional (3D) image acquired while the individual is stationary, defining a number of datapoints on the surface of the breasts in the 3D image acquired while the individual is stationary and a number of datapoints on the surface of an alignment region and selecting a subset of the 3D images acquired while the individual is moving. For each selected 3D image in the subset, the method may comprise pre-processing the selected 3D image to at least remove image data outside a predetermined region and rotate the selected 3D image to have each selected 3D image in the same orientation as the 3D image acquired while the individual is stationary, aligning the selected 3D image with respect to the alignment region of the 3D image acquired when the individual is stationary, defining a number of datapoints on the surface of the breasts in the selected 3D image and comparing the selected 3D image with the 3D image acquired when the individual is stationary by determining for each defined datapoint a vertical displacement. The method may further comprise determining, for each defined datapoint, a displacement parameter based on the determined vertical displacement for each 3D image in the subset of 3D images with respect to the 3D image acquired when the individual is stationary for the same defined datapoint, generating a mapping based on the displacement parameter for each defined datapoint and determining the boundary of the breasts using a threshold based on the mapping. The method may further comprise separating the breasts from other parts of the 3D image acquired while the individual is stationary based on the threshold value, defining a number of datapoints on the surface of the breasts in the 3D image acquired while the individual is stationary using horizontal slicing, calculating a shape discrepancy between the breasts in the 3D image acquired while the individual is stationary using the defined datapoints and datapoints in 3D images of breasts associated with known cup sizes, respectively and determining the cup size based on the calculated shape discrepancy for each known cup size. Each 3D image for the known cup sizes may be acquired while a model is stationary.
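For illustration only, the final classification step might be sketched as follows, assuming each known cup size is represented by a template array of datapoints defined in the same way as for the individual; the templates, the mean-squared-difference discrepancy measure and the function name predict_cup_size are assumptions for the sketch:

```python
import numpy as np

def predict_cup_size(individual_datapoints, templates):
    """Return the known cup size whose template datapoints have the smallest
    shape discrepancy (here, mean squared difference) from the individual's."""
    discrepancies = {size: np.mean((individual_datapoints - tpl) ** 2)
                     for size, tpl in templates.items()}
    return min(discrepancies, key=discrepancies.get), discrepancies

# Synthetic templates: one datapoint vector per known cup size.
rng = np.random.default_rng(3)
base = rng.normal(10.0, 1.0, size=180)
templates = {size: base * scale for size, scale in
             [("A", 0.90), ("B", 1.00), ("C", 1.10), ("D", 1.20)]}
individual = templates["C"] + rng.normal(0.0, 0.05, size=180)
size, scores = predict_cup_size(individual, templates)
print(size)    # expected: "C"
```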


In an aspect of the disclosure, the method may further comprise at least one of displaying the determined cup size or transmitting the determined cup size to a preset device.


Also disclosed is a non-contact method for evaluating a performance of a garment. The method may comprise receiving, by a processor, a plurality of three-dimensional (3D) images. The 3D images may be successive 3D images. The 3D images may include the breasts of the same individual. The 3D images may be acquired while the individual is moving. The method may further comprise receiving, by the processor, a three-dimensional (3D) image acquired while the individual is stationary, defining a number of datapoints on the surface of the breasts in the 3D image acquired while the individual is stationary and a number of datapoints on the surface of an alignment region and selecting a subset of the 3D images acquired while the individual is moving. For each selected 3D image in the subset, the method may comprise pre-processing the selected 3D image to at least remove image data outside a predetermined region and rotate the selected 3D image to have each selected 3D image in the same orientation as the 3D image acquired while the individual is stationary, aligning the selected 3D image with respect to the alignment region of the 3D image acquired when the individual is stationary, defining a number of datapoints on the surface of the breasts in the selected 3D image and comparing the selected 3D image with the 3D image acquired when the individual is stationary by determining for each defined datapoint a displacement. The method may further comprise determining, for each defined datapoint, a displacement parameter based on the determined displacement for each 3D image in the subset of 3D images with respect to the 3D image acquired when the individual is stationary for the same defined datapoint, generating a mapping based on the displacement parameter for each defined datapoint, identifying areas in the mapping with a displacement parameter greater than a threshold; and generating a report based on the identified areas.


In an aspect of the disclosure, the pre-processing may further comprise determining whether another body part is covering a surface of the breast and torso region and in response to determining that another body part is covering a surface of the breast or torso region, removing image data associated with the another body part and filling in a space corresponding to the removed image data with surface image points predicted for the space to maintain a curvature with a surrounding surface of the breast or maintain the curvature of the torso region.


In an aspect of the disclosure, the datapoints on the surface of the breasts may be defined using horizontal slicing. In an aspect of the disclosure, the datapoints may be defined by partitioning, by the processor, the breasts into horizontal slices and partitioning, by the processor, each horizontal slice into a plurality of portions on the surface of the breasts based on a fixed angular interval. Each portion may correspond to an angle value, and each portion may include a set of points. For each portion on each slice, the processor may determine an average distance among distances of the set of points with respect to one of the associated reference points and set a point associated with the average distance as a datapoint represented by the angle value corresponding to the portion. The datapoint may be one of the number of datapoints identified. When there is an absence of image points for particular portions in any of the horizontal slices, undefined values may be assigned to the datapoints for those portions.


In an aspect of the disclosure, the displacement may be a horizontal displacement. The horizontal displacement may be determined using






d′_j = √(x_ij^2 + y_ij^2) − √(x_i0^2 + y_i0^2)


where d′_j is an array containing the horizontal displacements of the defined datapoints for the j-th 3D image, measured as the change in distance from the associated reference point for the horizontal slice, where 1≤j≤N and N is the number of 3D images in the subset; x_ij and y_ij are the x-coordinate and y-coordinate, respectively, of the i-th datapoint in the j-th 3D image, where 1≤i≤P and P is the number of defined datapoints; and x_i0 and y_i0 are the x-coordinate and y-coordinate, respectively, of the i-th datapoint of the 3D image acquired when the individual is stationary. The displacement parameter may be a standard deviation of the calculated horizontal displacement for a given datapoint.
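As with the vertical case, this may be illustrated directly on arrays of datapoint coordinates (the shapes and synthetic values below are assumptions for the sketch):

```python
import numpy as np

rng = np.random.default_rng(4)
N, P = 12, 200                                   # N selected 3D images, P defined datapoints
xy_base = rng.normal(0.0, 1.0, size=(P, 2))      # (x_i0, y_i0) for the base scan
xy_moving = xy_base + rng.normal(0.0, 0.03, size=(N, P, 2))   # (x_ij, y_ij) per scan

r_base = np.linalg.norm(xy_base, axis=1)          # sqrt(x_i0^2 + y_i0^2)
r_moving = np.linalg.norm(xy_moving, axis=2)      # sqrt(x_ij^2 + y_ij^2)
d_prime = r_moving - r_base                       # d'_j per scan, per datapoint
horizontal_disp_param = d_prime.std(axis=0)       # standard deviation per datapoint
print(horizontal_disp_param.shape)                # (P,)
```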


In an aspect of the disclosure, the base image for comparison may be a three-dimensional image acquired while the individual is stationary or one of the three-dimensional images acquired while the individual is moving for determining the boundary of the breasts. In an aspect of the disclosure, a non-contact method for determining a boundary of the breasts may comprise receiving a plurality of three-dimensional (3D) images. In some aspects, the 3D images may be successive 3D images acquired while the individual is moving. The plurality of 3D images may include the breasts of the same individual. In other aspects, one 3D image may be acquired while the individual is stationary. The method may further comprise selecting one 3D image as a base image. The method may further comprise selecting a subset of the 3D images acquired while the individual is moving. For the base image, the method may comprise pre-processing the base image to at least remove image data outside a predetermined region and rotate to a target orientation and defining a number of datapoints on the surface of the breasts and a number of datapoints on the surface of an alignment region. For the remaining selected 3D images in the subset (after the base 3D image may be removed) or the selected 3D images in the subset, the method may comprise pre-processing the 3D image to at least remove image data outside a predetermined region and rotate the 3D image to have each 3D image selected in the same orientation as the base image, aligning the 3D image with respect to the alignment region of the base image, defining a number of datapoints on the surface of the breasts in the 3D image; and comparing the 3D image with the base image by determining for each defined datapoint a vertical displacement. The method may further comprise determining, for each defined datapoint, a displacement parameter based on the determined vertical displacement for each 3D image selected with respect to the base image for the same defined datapoint, generating a mapping based on the displacement parameter for each defined datapoint, and determining the boundary of the breasts using a threshold based on the mapping.


In another aspect of the disclosure, a non-contact method for predicting a cup size of breasts of an individual may comprise receiving a plurality of three-dimensional (3D) images. In some aspects, the 3D images may be successive 3D images acquired while the individual is moving. The plurality of 3D images may include the breasts of the same individual. In other aspects, one 3D image may be acquired while the individual is stationary. The method may further comprise selecting one 3D image as a base image. The method may further comprise selecting a subset of the 3D images acquired while the individual is moving. For the base image, the method may comprise pre-processing the base image to at least remove image data outside a predetermined region and rotate to a target orientation and defining a number of datapoints on the surface of the breasts and a number of datapoints on the surface of an alignment region. For the remaining selected 3D images in the subset (after the base image may be removed) or the selected 3D images in the subset, the method may comprise pre-processing the 3D image to at least remove image data outside a predetermined region and rotate the 3D image to have each 3D image selected in the same orientation as the base image, aligning the 3D image with respect to the alignment region of the base image, defining a number of datapoints on the surface of the breasts in the 3D image; and comparing the 3D image with the base image by determining for each defined datapoint a vertical displacement. The method may further comprise determining, for each defined datapoint, a displacement parameter based on the determined vertical displacement for each 3D image selected with respect to the base image for the same defined datapoint, generating a mapping based on the displacement parameter for each defined datapoint, and determining the boundary of the breasts using a threshold based on the mapping. The method may further comprise separating the breasts from other parts of the 3D image acquired while the individual is stationary based on the threshold value, defining a number of datapoints on the surface of the breasts in the 3D image acquired while the individual is stationary using horizontal slicing, calculating a shape discrepancy between the breasts in the 3D image acquired while the individual is stationary using the defined datapoints and datapoints in 3D images of breasts associated with known cup sizes, respectively and determining the cup size based on the calculated shape discrepancy for each known cup size. Each 3D image for the known cup sizes may be acquired while a model is stationary.


In another aspect of the disclosure, a non-contact method for evaluating a performance of a garment may comprise receiving a plurality of three-dimensional (3D) images. In some aspects, the 3D images may be successive 3D images acquired while the individual is moving. The plurality of 3D images may include the breasts of the same individual. In other aspects, one 3D image may be acquired while the individual is stationary. The method may further comprise selecting one 3D image as a base image. The method may further comprise selecting a subset of the 3D images acquired while the individual is moving. For the base image, the method may comprise pre-processing the base image to at least remove image data outside a predetermined region and rotate to a target orientation and defining a number of datapoints on the surface of the breasts and a number of datapoints on the surface of an alignment region. For the remaining selected 3D images in the subset (after the base image may be removed) or the selected 3D images in the subset, the method may comprise pre-processing the 3D image to at least remove image data outside a predetermined region and rotate the 3D image to have each 3D image selected in the same orientation as the base image, aligning the 3D image with respect to the alignment region of the base image, defining a number of datapoints on the surface of the breasts in the 3D image; and comparing the 3D image with the base image by determining for each defined datapoint a displacement. The method may further comprise determining, for each defined datapoint, a displacement parameter based on the determined displacement for each 3D image selected with respect to the base image for the same defined datapoint, generating a mapping based on the displacement parameter for each defined datapoint, and determining the boundary of the breasts using a threshold based on the mapping. The method may further comprise identifying areas in the mapping with a displacement parameter greater than a threshold; and generating a report based on the identified areas.


Also disclosed is an apparatus or system which may comprise a three-dimensional (3D) image scanner, a memory, a processor and a display. The 3D image scanner may be configured to obtain images of an individual and generate a plurality of 3D images of the individual. The memory may be configured to store image data for each 3D image. The processor may be configured to select a subset of the 3D images. The subset of 3D images may be 3D images acquired while the individual is moving. The processor may also be configured to select a base image. The base image may be a 3D image acquired while the individual is stationary or one of the selected 3D images in the subset. For the base image, the processor may be configured to pre-process the base image to at least remove image data outside a predetermined region and rotate to a target orientation; and define a number of datapoints on the surface of the breasts and a number of datapoints on the surface of an alignment region. For the selected 3D images in the subset or the remaining 3D images in the subset (after the base 3D image may be removed), the processor may be configured to pre-process the 3D image to at least remove image data outside a predetermined region and rotate the 3D image to have each 3D image selected in the same orientation as the base image, align the 3D image with respect to the alignment region of the base image, define a number of datapoints on the surface of the breasts in the 3D image and compare the 3D image with the base image by determining for each defined datapoint a vertical displacement. The processor may also be configured to determine, for each defined datapoint, a displacement parameter based on the determined vertical displacement for each 3D image selected with respect to the base image for the same defined datapoint, generate a mapping based on the displacement parameter for each defined datapoint and determine the boundary of the breasts using a threshold based on the mapping. The display may be configured to display at least the mapping.


In an aspect of the disclosure, the datapoints on the surface of the breasts may be defined using vertical slicing.


In an aspect of the disclosure, the 3D images may be successive 3D images acquired while the individual is moving. In other aspects, the 3D image scanner may be further configured to obtain images while the individual is stationary and generate a three-dimensional (3D) image of the individual.


In an aspect of the disclosure, the 3D image scanner may comprise a plurality of cameras positioned at different locations to cover a 360° view.


In an aspect of the disclosure, the apparatus or system may comprise one or more communication interfaces. In an aspect of the disclosure, the 3D image scanner may be configured to transmit the 3D images to the processor via a communication interface. The communication interface may be wireless. In an aspect of the disclosure, the processor may be configured to transmit the mapping to the display via a communication interface.


In an aspect of the disclosure, the processor may be configured to predict a cup size of the breasts of an individual. In this aspect, the processor may be configured to separate the breasts from other parts of the 3D image acquired while the individual is stationary based on the threshold value, define a number of datapoints on the surface of the breasts in the 3D image acquired while the individual is stationary using horizontal slicing, calculate a shape discrepancy between the breasts in the 3D image acquired while the individual is stationary using the defined datapoints and datapoints in images of breasts associated with known cup sizes, respectively, and predict the cup size based on the calculated shape discrepancy for each known cup size. Each 3D image for the known cup sizes may be acquired while a model is stationary and processed in the same manner as the 3D image acquired while the individual is stationary.


In an aspect of the disclosure, the predicted cup size may be displayed and/or transmitted to a user terminal.


In an aspect of the disclosure, the apparatus or system may further comprise a point of sales terminal and the display may be in the point of sales terminal.


Also disclosed is an apparatus or system which may comprise a three-dimensional (3D) image scanner, a memory, and a processor. The 3D image scanner may be configured to obtain images of an individual and generate a plurality of 3D images of the individual. The memory may be configured to store image data for each 3D image. The processor may be configured to select a subset of the 3D images. The subset of 3D images may be 3D images acquired while the individual is moving. The processor may also be configured to select a base image. The base image may be a 3D image acquired while the individual is stationary or one of the selected 3D images in the subset (or another 3D image). For the base image, the processor may be configured to pre-process the base image to at least remove image data outside a predetermined region and rotate to a target orientation; and define a number of datapoints on the surface of the breasts and a number of datapoints on the surface of an alignment region. For the selected 3D images in the subset or the remaining 3D images in the subset (after the base 3D image may be removed), the processor may be configured to pre-process the 3D image to at least remove image data outside a predetermined region and rotate the 3D image to have each 3D image selected in the same orientation as the base image, align the 3D image with respect to the alignment region of the base image, define a number of datapoints on the surface of the breasts in the 3D image and compare the 3D image with the base image by determining for each defined datapoint a displacement. The processor may also be configured to determine, for each defined datapoint, a displacement parameter based on the determined displacement for each 3D image selected with respect to the base image for the same defined datapoint, generate a mapping based on the displacement parameter for each defined datapoint, identify areas in the mapping with a displacement parameter greater than a threshold and generate a report based on the identified areas.


In an aspect of the disclosure, the apparatus or system may further comprise a display and the report may be displayed on the display. In other aspects, the processor may transmit the report to a user terminal.


In an aspect of the disclosure, the 3D images may be successive 3D images acquired while the individual is moving. In other aspects, the 3D image scanner may be further configured to obtain images while the individual is stationary and generate a three-dimensional (3D) image of the individual.


In an aspect of the disclosure, the 3D images may be acquired while the individual is wearing a garment. In an aspect of the disclosure, the garment may be a sports bra. In an aspect of the disclosure, the 3D images may also be acquired while the individual is nude, and the processor may be configured to compare the determined displacement when the individual is wearing the garment and when the individual is nude. In an aspect of the disclosure, the report may comprise a percent difference in the displacement when the individual is wearing the garment and when the individual is nude.
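For illustration, such a report entry might be computed per datapoint as follows (the sample values and the function name percent_reduction are assumptions for the sketch):

```python
import numpy as np

def percent_reduction(disp_nude, disp_garment):
    """Percent difference in the displacement parameter between the nude and
    garment conditions, per datapoint (positive = garment reduces displacement)."""
    return 100.0 * (disp_nude - disp_garment) / disp_nude

disp_nude = np.array([12.0, 10.0, 8.0, 4.0])        # e.g., displacement parameter, nude
disp_garment = np.array([6.0, 5.5, 5.0, 3.8])       # same datapoints, wearing a sports bra
print(np.round(percent_reduction(disp_nude, disp_garment), 1))   # [50.  45.  37.5  5.]
```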


In an aspect of the disclosure, the displacement may be a horizontal displacement.


Also disclosed is an apparatus which may comprise a processor and a display. The processor may be configured to receive a plurality of three-dimensional (3D) images and store the 3D images in memory, select a subset of the 3D images and select a base image. The subset of 3D images may be 3D images acquired while the individual is moving. The base image may be one of the selected 3D images in the subset. The base image may be another 3D image acquired while the individual is moving, or 3D image acquired while the individual is stationary. For the base image, the processor may be configured to pre-process the base image to at least remove image data outside a predetermined region and rotate to a target orientation and define a number of datapoints on the surface of the breasts and a number of datapoints on the surface of an alignment region. For the selected 3D images in the subset or the remaining 3D images in the subset (after the base 3D image may be removed), the processor may be configured to pre-process the 3D image to at least remove image data outside a predetermined region and rotate the 3D image to have each 3D image selected in the same orientation as the base image, align the 3D image with respect to the alignment region of the base image, define a number of datapoints on the surface of the breasts in the 3D image and compare the 3D image with the base image by determining for each defined datapoint a vertical displacement. The processor may be further configured to determine, for each defined datapoint, a displacement parameter based on the determined vertical displacement for each 3D image selected with respect to the base image for the same defined datapoint, generate a mapping based on the displacement parameter for each defined datapoint and determine the boundary of the breasts using a threshold based on the mapping. The display may be configured to display at least the mapping.


In an aspect of the disclosure, the processor may be further configured to predict a cup size of breasts of an individual.


Also disclosed is an apparatus which may comprise a processor. The processor may be configured to receive a plurality of three-dimensional (3D) images, store the 3D images in memory, select a subset of the 3D images and select a base image. The subset of 3D images may be 3D images acquired while the individual is moving. The base image may be one of the selected 3D images in the subset. The base image may be another 3D image acquired while the individual is moving, or 3D image acquired while the individual is stationary. For the base image, the processor may be configured to pre-process the base image to at least remove image data outside a predetermined region and rotate to a target orientation and define a number of datapoints on the surface of the breasts and a number of datapoints on the surface of an alignment region. For the selected 3D images in the subset or the remaining 3D images in the subset (after the base 3D image may be removed), the processor may be configured to pre-process the 3D image to at least remove image data outside a predetermined region and rotate the 3D image to have each 3D image selected in the same orientation as the base image, align the 3D image with respect to the alignment region of the base image, define a number of datapoints on the surface of the breasts in the 3D image and compare the 3D image with the base image by determining for each defined datapoint a displacement. The processor may be further configured to determine, for each defined datapoint, a displacement parameter based on the determined displacement for each 3D image selected with respect to the base image for the same defined datapoint, generate a mapping based on the displacement parameter for each defined datapoint, identify areas in the mapping with a displacement parameter greater than a threshold and generate a report based on the identified areas.





BRIEF DESCRIPTION OF THE DRAWINGS

The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.



FIG. 1A illustrates a device in accordance with aspects of the disclosure;



FIG. 1B illustrates a system in accordance with aspects of the disclosure;



FIG. 2 illustrates a method in accordance with aspects of the disclosure;



FIG. 3 illustrates an example of images for one complete gait cycle;



FIG. 4 illustrates a pre-processing method in accordance with aspects of the disclosure;



FIGS. 5A-5C illustrate an example of the pre-processing in accordance with aspects of the disclosure showing arms covering parts of the predetermined section (FIG. 5A), holes caused by data removal (FIG. 5B) and filling in the holes (FIG. 5C) in accordance with aspects of the disclosure;



FIGS. 6A-6E illustrate an example of rotation of an image;



FIG. 7A illustrates an example of the underbust level and bust slice in accordance with aspects of the disclosure;



FIG. 7B illustrates an example of a bust slice and datapoints on the slice and determination of bust points in accordance with aspects of the disclosure;



FIG. 8 illustrates a method of defining the central axis and shifting the image in accordance with aspects of the disclosure;



FIGS. 9A and 9B illustrate an example of a central axis and shifting of the predetermined section in accordance with aspects of the disclosure;



FIG. 10 illustrates a method for defining the surface datapoints in the alignment region in accordance with aspects of the disclosure;



FIG. 11A illustrates an example of the angular increments for the surface datapoints in accordance with aspects of the disclosure;



FIG. 11B illustrates an example of the horizontal slices in the alignment region in accordance with aspects of the disclosure;



FIG. 12A illustrates an example of images offset (not aligned) and FIG. 12B illustrates an example of the images aligned in accordance with aspects of the disclosure based on the alignment region;



FIGS. 13A-13C illustrate an example of a vertical slice, surface datapoints for vertical slices and the reference point(s) for the vertical slices for vertical slicing in accordance with aspects of the disclosure;



FIG. 14 illustrates a method for defining the surface datapoints on one of the breast regions for the vertical slicing shown in FIGS. 13A-13C in accordance with aspects of the disclosure;



FIGS. 15A-15D illustrate an example of vertical slices, surface datapoints for vertical slices and the reference point(s) for the vertical slices for vertical slicing in accordance with other aspects of the disclosure;



FIG. 16 illustrates a method for defining the surface datapoints for the breast region using the vertical slicing shown in FIGS. 15A-15D in accordance with aspects of the disclosure;



FIGS. 17A and 17B illustrate an example of a heat map superposed over the right breast (FIG. 17A) and the left breast (FIG. 17B) in accordance with aspects of the disclosure;



FIG. 18 illustrates an example of a heat map superposed over the breasts and the first area and the second area used to determine a threshold for separation in accordance with aspects of the disclosure;



FIG. 19 illustrates an example of the heat map being affected by bra straps;



FIG. 20 illustrates a method for determining a separation vertical slice in accordance with aspects of the disclosure;



FIG. 21A and FIG. 21B illustrate an example of the separation slice in accordance with aspects of the disclosure;



FIG. 22 illustrates a method for determining a separation y-value in accordance with aspects of the disclosure;



FIGS. 23A-23C illustrate an example of the vertical displacement for three different body types where the images are aligned in the alignment region in accordance with aspects of the disclosure;



FIG. 24 illustrates another method in accordance with aspects of the disclosure;



FIG. 25 illustrates another method in accordance with aspects of the disclosure;



FIGS. 26A and 26B illustrate examples of heat maps generated in accordance with the method illustrated in FIG. 25.





DETAILED DESCRIPTION

In accordance with aspects of the disclosure, a device 10 or system 30 captures a series of three-dimensional images of an individual while the individual is moving to determine displacement of the breasts. In some aspects of the disclosure, the displacement may then be used to determine boundary information between the breasts and the chest wall. The boundary information may also be used to predict a cup size for a bra. The 4D scanning described herein (where the fourth dimension is time) also makes it possible to track the whole breast under motion; therefore, the scans can also provide information regarding the shape change of the breasts during physical activity. In some aspects of the disclosure, this information may be used to evaluate the performance of a garment such as a sports bra. In some aspects, the device 10 or system 30 may capture a 3D image of the individual while the individual is stationary. Either the 3D image captured while the individual is stationary (static image) or one of the 3D images captured while the individual is moving (dynamic image) is used as a base image for comparison.


During physical activities, the breasts usually have a time delay in displacement relative to the chest wall, and this relative displacement in the vertical direction causes the bouncing of the breasts. Understanding the vertical displacement of the breasts may be critical, as many studies have shown that vertical displacement during physical activities is closely related to breast discomfort.



FIG. 1A illustrates a device 10 in accordance with aspects of the disclosure. The device 10 may comprise an imaging device 12, a processor 14, a memory 16 and a display 18. The imaging device 12 may be a 3D scanner. The 3D scanner may be a temporal 3D full-body scanner such as the 3dMDbody18.t System Model available from 3dMD, LLC. In an aspect of the disclosure, the 3D image is a combination of images obtained from multiple different cameras located at different views. In other aspects, the imaging device 12 may be any array of cameras linked together and synchronized to generate the 3D image(s). The cameras may be in respective mobile phones or other devices such as, but not limited to, a tablet or laptop with a camera, where each device is positioned in a stationary position. The stationary positions may be such that the 3D image covers 360 degrees of an individual. These devices may be linked via BLUETOOTH or other wireless connection to transmit the images to one of the devices for reconstruction or generation of the 3D image from the multiple images from the multiple cameras taken at the same time. In other aspects, the imaging device 12 may be multiple web cameras or desktop cameras for a computer. These cameras are available from Logitech. In an aspect of the disclosure, the imaging device 12 may comprise cameras incorporated into a treadmill or other exercise equipment, also at different positions. The images from the multiple cameras may be timestamped or synchronized such that a 3D image can be constructed from the multiple images from the multiple cameras acquired while the individual is moving (a time series of 3D images can be constructed).


The processor 14 may be, for example, a central processing unit (CPU) or graphic processing unit (GPU) of the device 10, a microprocessor, a system on chip, and/or other types of hardware processing unit. The memory 16 may be configured to store a set of instructions, where the instructions may include code such as source code and/or executable code. The processor 14 may be configured to execute the set of instructions stored in the memory 16 to implement the methods and functions described herein. In some examples, the set of instructions may include code relating to various image processing techniques, encryption and decryption algorithms, slicing (vertical and/or horizontal), displacement calculation, heat map determination and/or other types of techniques and algorithms that can be applied to implement the methods and functions described herein.


The display 18 may be a touchscreen such as on a mobile phone or tablet. The display 18 may also be a screen of a point of sales terminal at a store. In other aspects, the display 18 may be a computer screen or television screen. In accordance with aspects of the disclosure, the display 18 may be configured to display the heat map(s) determined from a displacement parameter, the threshold(s) for the boundary, the separated breasts defined by the boundaries, the predicted cup size and performance evaluation reports as described herein.


In some aspects of the disclosure, the device 10 may perform all the functionality described herein. However, in other aspects, a system 30 comprising multiple devices may collectively perform the functionality. As shown in FIG. 1B, the system 30 may comprise a server 40 and a client 50 (terminal). The client 50 may be configured to obtain the 3D images and transmit the same to the server 40 for processing. The client 50 may comprise the imaging device 12 and the display 18 described above. The client 50 may communicate with the server 40 via a communication interface 55. Although not shown in FIG. 1B, the client device may also comprise a processor and memory. The memory may be for temporarily storing the image data prior to transmission to the server 40. The processor may control the imaging device 12 to acquire the 3D images and construct the 3D images from the multiple images from the cameras. This communication interface 55 may be a wireless interface such as a WI-FI® interface. In other aspects, the wireless interface may be BLUETOOTH®. In other aspects, the communication interface 55 may be for wired communication such as Ethernet. The server 40 may comprise the processor 14 and memory 16 described above. Similarly, the server 40 may comprise a communication interface 45 for communicating with the client 50 (terminal).


In some aspects, the server 40 may also have a display. The display may display the heat map(s) determined from a displacement parameter, the threshold(s) for the boundary, the separated breasts defined by the boundaries, the predicted cup size and performance evaluation reports as described herein.


In other aspects, instead of the client 50 transmitting the image data to the server 40, the client 50 may have a memory card; the images may be stored on the memory card and transferred to the server 40 by removing the memory card from the client 50 and inserting it into the server 40. In some aspects, the client 50 may include multiple DSLRs.


In some aspects, the client 50 may be installed in a fitting room of a store. For example, the client 50 may comprise the multiple cameras, which may be installed on a railing system on a wall or door of an individual fitting room. The cameras may be mounted to different portions of the fitting room to have complete coverage, e.g., to enable 360-degree acquisition. The person may be able to raise or lower the client 50 (via the railing system) such that the imaging device 12 is aligned with the height of the person (breast height). The server 40 may also be located within the same store, such as at a point of sales terminal. In this manner, the individual may be imaged (3D image constructed) in privacy while, at the same time, the cup size prediction may be shown to an employee of the store.


In other aspects, the client 50 may be used at home such that the individual may be imaged in the privacy of the home, and the server 40 may be located at a manufacturer of the garment. The individual may position the multiple cameras around a room such that the 3D image may be constructed from the images from each camera.


In other aspects, the device 10/system 30 may be located at the manufacturer and be used to design and test a garment.


In accordance with aspects of the disclosure, the imaging device 12 acquires a plurality of 3D images while the individual is running. The 3D images include the chest/breast region. The individual may be running in place in front of the imaging device 12. Alternatively, the individual may be running on a treadmill (or another piece of exercise equipment). In some aspects, the imaging device 12 may also acquire a 3D image of the individual when the individual is stationary. For the stationary 3D image, it is preferred that the arms do not cover the breasts. The image data is transmitted (transferred) to the processor 14 (and memory 16). The image data for the 3D image(s) is with respect to a coordinate system of the imaging device 12 (e.g., its origin). In some aspects, the individual may be wearing a bra or another garment while the 3D images are captured. In other aspects, the individual may be nude.



FIG. 2 illustrates a method in accordance with aspects of the disclosure. At S1, the processor 14 may receive the 3D image of the individual acquired while the individual is stationary (also referred to as the stationary 3D image or static image). As noted above, the 3D image is a combination of images acquired from different locations by cameras at the same time (e.g., a reconstruction or model). The stationary 3D image may be used as the base image for comparison with the 3D images of the individual moving (e.g., running) to determine displacement. In other aspects, S1 may be omitted when one of the 3D images acquired while the individual is moving is used as the base image for comparison. At S3, the processor 14 may receive the plurality of 3D images (acquired at different times) of the individual acquired while the individual was moving (where the images that are combined to obtain a single 3D image are acquired at the same time from multiple different locations).


At S5, a subset of the 3D images of the individual acquired while the individual was moving may be selected for the displacement determination. In an aspect of the disclosure, the processor 14 may automatically select the subset. In some aspects, the number of 3D images selected may be greater than a preset number of 3D images. In some aspects, the preset number is 9. Additionally, since movement is not consistent when a person starts to run, in an aspect of the disclosure the selected 3D images are those acquired after a second preset number of 3D images, such that the pattern of displacement may be uniform through the gait cycle. In other aspects of the disclosure, the selected 3D images should include representative images from at least one complete gait cycle. In some aspects, the selected 3D images may have a uniform timing between them. FIG. 3 illustrates an example of images for one complete gait cycle. The larger images (first and last) highlight the start of a cycle. The larger image in the center highlights the middle of the cycle. The more 3D images selected, the higher the accuracy of the displacement determination may be; however, selecting more images increases the processing time and requires more processing capacity. In other aspects of the disclosure, 3D images from a half of a gait cycle may be selected. However, in this aspect, more 3D images from that half of the gait cycle may be selected.
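One simple selection rule consistent with these constraints is sketched below; the warm-up count, frames-per-cycle figure and function name select_subset are assumptions for the sketch rather than values taken from the disclosure:

```python
def select_subset(num_frames, warmup=30, frames_per_cycle=24, min_frames=9):
    """Skip an initial warm-up period, then take uniformly spaced frame indices
    spanning at least one complete gait cycle."""
    span = max(frames_per_cycle, min_frames)
    last = min(num_frames, warmup + frames_per_cycle)        # end of one cycle after warm-up
    step = max(1, (last - warmup) // span)
    indices = list(range(warmup, last, step))[:span]
    return indices

print(select_subset(num_frames=200))   # indices of 24 frames after a 30-frame warm-up
```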


In other aspects of the disclosure, the processor 14, using machine learning, may identify key image positions in the gait cycle and select 3D images based on the identification. In other aspects, a person may manually select the subset of 3D images. In some aspects, one of the selected 3D images may be used as the base image.


Each of the selected 3D images (subset) may be subsequently processed to determine displacement (other than the one 3D image in the subset selected as the base image, if any). S9-S19 is repeated for each of the selected 3D images in the subset, where the iterations may be determined using a counter. For example, at S7 the processor 14 may set a counter to 1 (I=1) and obtain the selected 3D image in the subset.


At S9, the processor 14 performs pre-processing of the 3D image, e.g., a series of image pre-processing steps to identify a predetermined section (remove data outside the region), fill any holes in the predetermined section caused by the removal, match orientation, identify an underbust level (and bust) and define a central axis and shift the image data as needed.



FIG. 4 illustrates an example of the pre-processing of the 3D images in accordance with aspects of the disclosure. At S50, the processor 14 may identify the predetermined section of the 3D image. In some aspects of the disclosure, the predetermined section is the torso region of the body. The torso region includes the chest, the breasts and the stomach. In the identification, the processor 14 may perform data cleaning, such as removing noisy image points and removing the limbs, neck, and head, e.g., removal of image data. Because the individual may be running, the person's arms may be too close to the torso region and cover portions thereof as shown in FIG. 5A. The circles identify the covered portions. When the parts of the body other than the predetermined section of an image are removed, e.g., the arms, this may result in holes or data voids in the predetermined section, such as shown in FIG. 5B. The circles in FIG. 5B show the holes. Therefore, at S52 the processor 14 may fill in any holes in the predetermined section caused by the removal. The holes may be filled in by retaining the shape of the object, e.g., the breast or abdominal region. For example, the processor 14 may estimate the curvature using the surrounding area and add the data to maintain the estimated curvature. In some aspects, the processor 14 may use a program such as Meshmixer available from Autodesk, Inc., to fill in the holes. For example, the received image data for a 3D image may include a mesh pattern for the surface image points. This mesh pattern may be triangular. One or more triangles in the mesh pattern adjacent to the openings may be selected and the vertices of the selected triangles may be used for the estimation. FIG. 5C shows an example of the 3D image after the holes are filled in.
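A simplified sketch of curvature-preserving hole filling is shown below; it fits a quadratic surface to the vertices surrounding a hole and samples it at the missing locations, which is only one way to approximate the behavior described above (the quadratic model and the synthetic example are assumptions for the sketch):

```python
import numpy as np

def fill_hole(neighbor_pts, hole_xy):
    """Fit a quadratic surface z = f(x, y) to vertices surrounding a hole and
    sample it at the missing (x, y) locations, so the patch approximately
    follows the local curvature of the surrounding surface (simplified sketch)."""
    x, y, z = neighbor_pts[:, 0], neighbor_pts[:, 1], neighbor_pts[:, 2]
    A = np.column_stack([x**2, y**2, x*y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    xh, yh = hole_xy[:, 0], hole_xy[:, 1]
    Ah = np.column_stack([xh**2, yh**2, xh*yh, xh, yh, np.ones_like(xh)])
    return np.column_stack([xh, yh, Ah @ coeffs])

# Illustrative use: surface z = x^2 + y^2 with a missing patch near the origin.
rng = np.random.default_rng(5)
xy = rng.uniform(-1, 1, size=(500, 2))
ring = xy[np.linalg.norm(xy, axis=1) > 0.3]                   # points around the hole
neighbors = np.column_stack([ring, (ring ** 2).sum(axis=1)])
hole_xy = rng.uniform(-0.3, 0.3, size=(50, 2))
patch = fill_hole(neighbors, hole_xy)
print(np.allclose(patch[:, 2], (hole_xy ** 2).sum(axis=1), atol=1e-6))   # True
```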


At S54, the processor 14 may rotate the 3D image to match an orientation of the base image. Where the base image is the stationary image, the rotation is to match the orientation with the stationary image. When one of the 3D images acquired while the person is moving is used as the base image, the rotation would be to match the orientation of the selected 3D image (base image). The rotation may be performed to match the orientation in different views. For example, the 3D images may be rotated such that the images are upright and face frontward. FIGS. 6A-6E illustrate an example of the rotation in the different views. FIG. 6A illustrates an example of a front view of the predetermined section (prior to rotation). The line 600 shows that the central axis is not orthogonal to the ground. In this case, the person is leaning. This is evident from the x-coordinates shown in FIG. 6A. FIG. 6A shows the x-axis direction and the z-axis direction. For purposes of the description, Cartesian coordinates are used (defined by the x-axis, the y-axis, and the z-axis). The x-axis is the transverse or frontal axis, the y-axis is the sagittal axis and the z-axis is the longitudinal axis. However, the coordinate system is not limited to Cartesian and other coordinate systems may be used.


The curved arrow in FIG. 6A indicates the direction of rotation, e.g., clock-wise in this example. FIG. 6B illustrates an example of the rotated image in the front view.



FIG. 6C illustrates a side view of the predetermined section of the 3D image (prior to rotation). FIG. 6C shows the y-axis direction and the z-axis direction. As can be seen from line 600A, the person is leaning frontward. The curved arrow in FIG. 6C indicates the direction of rotation, e.g., clock-wise in this example. FIG. 6D illustrates an example of the rotated image in the side view.



FIG. 6E illustrates a top view of the predetermined section of the 3D image (prior to rotation). FIG. 6E shows the x-axis direction and the y-axis direction. As can be seen from line 600B, the line 600B is not parallel to y=0 (e.g., a target orientation for this view). The curved arrow in FIG. 6E indicates the direction of rotation, e.g., clock-wise in this example.


For the base image, the 3D image may be rotated to a preset orientation. In some aspects, the orientation is facing front or back such that the coronal or frontal plane is orthogonal to the y-axis.
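As a hedged sketch only, this rotation could be expressed with a standard rotation utility; the lean angles, their signs and the axis ordering below are assumptions for illustration, not values taken from the disclosure.

```python
# Sketch under stated assumptions: rotate a torso point cloud, given estimated lean
# angles, so the longitudinal (z) axis is vertical and the body faces frontward.
import numpy as np
from scipy.spatial.transform import Rotation as R

def rotate_upright(points, lean_side_deg, lean_front_deg, yaw_deg):
    """points: (N, 3) array of x, y, z surface coordinates; angle names are hypothetical."""
    # front-view lean is corrected about the y (sagittal) axis, side-view lean about the
    # x (transverse) axis, and the top-view correction is about the z (longitudinal) axis
    rot = R.from_euler("yxz", [-lean_side_deg, -lean_front_deg, -yaw_deg], degrees=True)
    return rot.apply(points)
```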


At S56, the processor 14 may define the central axis in the selected 3D image and shift the image if needed. FIG. 8 illustrates a method of defining the central axis and shifting the 3D image in accordance with aspects of the disclosure. In an aspect of the disclosure, the central axis is parallel to the longitudinal axis (orthogonal to the x-y plane). The processor 14 may determine an average with respect to a first direction (e.g., the x-direction). For example, the processor 14 may average the x-components of all the image points of the predetermined section (as processed above) to determine a first average value at S70. At S72, the processor 14 may determine an average with respect to a second direction (e.g., the y-direction). For example, the processor 14 may average the y-components of all the image points of the predetermined section (as processed above) to determine a second average value. The central axis can be defined as an axis intersecting the first average value and the second average value. An example of a central axis 900 is shown in FIG. 9A superposed on a processed 3D image (above the underbust level 700). FIG. 9A shows the y-axis direction and the z-axis direction. As can be seen, the central axis 900 does not intersect the origin. For example, as shown in FIG. 9A, the central axis does not intersect Y=0.


At S74, the processor 14 may shift the predetermined section (as processed above) such that the central axis 900 intersects a reference point, such as the origin. This effectively causes the average values (in the first direction and the second direction) to move to the reference point. For example, the processor 14 may shift the predetermined section (as processed above) horizontally, such as shown in FIG. 9B, until the central axis 900 is aligned with the x-component and the y-component of a (3D) reference point. In an example, the reference point can be an origin (e.g., coordinates (0, 0, 0) of the 3D Cartesian coordinate system). Thus, the horizontal shifting can be performed to make the central axis 900, defined by the averaged x-coordinates and y-coordinates of all image points on the predetermined section (as processed above), align or coincide with x=0 and y=0. As shown in FIG. 9B, after shifting, the central axis 900 intersects Y=0.
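A minimal sketch of S70-S74, assuming the torso surface is available as an (N, 3) point array; the variable names are illustrative.

```python
# Sketch only: the central axis is taken as the mean x and mean y of the torso points
# (S70, S72), and the cloud is shifted horizontally so that axis passes through the
# origin (S74). Vertical (z) alignment is handled later against the base image.
import numpy as np

def center_on_axis(points):
    """points: (N, 3) array of torso surface coordinates."""
    x_avg = points[:, 0].mean()      # first average value (S70)
    y_avg = points[:, 1].mean()      # second average value (S72)
    shifted = points.copy()
    shifted[:, 0] -= x_avg           # horizontal shift only (S74)
    shifted[:, 1] -= y_avg
    return shifted
```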


In some aspects, the processor 14 may shift the predetermined section (as processed above) such that the underbust line 700 aligns with the z-component of the reference point (e.g., Z=0). However, since the 3D images are going to be vertically shifted to align with other 3D images, this vertical shifting may be omitted.


After the central axis is defined, the bust points may be determined as a local apex in the 3D image. FIG. 7A shows an example of a bust slice 705 determined from the local apex. The local apex may be defined as having a local maximum y-coordinate (absolute value). The local maximum may be identified by examining the y coordinates for different z-values. In an aspect of the disclosure, the bust points 760 may be determined for the left breast and the right breast at S58. In an aspect of the disclosure, the z-coordinate associated with the local maximum y-coordinate is recorded. Afterwards, the 3D image is sliced with a horizontal slice having the recorded z-coordinate, e.g., the bust slice 705. FIG. 7B illustrates an example of the bust slice 705. Datapoints 750 for the bust slice 705 are defined to identify the x coordinate/y coordinate for the bust points 760. As illustrated in FIG. 7B, there may be a datapoint 750 every preset number of degrees on the bust slice 705. As described above, surface image points near a datapoint may have distances averaged to obtain the value for the datapoint 750. The distances between multiple surface image points and a reference point 755 for the bust slice are determined and averaged to define the value for the datapoint(s) in a similar manner as described below for the alignment region (see FIG. 10). The reference point 755 for the bust slice is the central axis projected onto the slice 705. The bust point 760 for a breast is defined as the datapoint 750 having the longest distance from the reference point 755 among the datapoints 750 for the bust slice. FIG. 7B illustrates an example of the bust point 760 for the right breast. The bust point 760 will also be determined for the left breast.
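The bust-point search could be sketched as follows; this is a simplified, hedged illustration that works directly on raw slice points rather than the angle-averaged datapoints 750 described above, and the bin width is an assumption.

```python
# Simplified sketch: find the z level with the largest |y| (local apex), slice at
# that level, and take the slice point farthest from the projected central axis.
import numpy as np

def find_bust_point(points, z_bin=5.0):
    """points: (N, 3) surface points for one breast side, central axis at x=y=0."""
    z_levels = np.round(points[:, 2] / z_bin) * z_bin
    apex_z = max(set(z_levels.tolist()),
                 key=lambda z: np.abs(points[z_levels == z, 1]).max())
    bust_slice = points[np.abs(points[:, 2] - apex_z) < z_bin / 2]
    dist = np.hypot(bust_slice[:, 0], bust_slice[:, 1])   # distance from reference point
    return bust_slice[np.argmax(dist)]
```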


At S60, the processor 14 may identify the underbust level 700. To determine the underbust level 700, in some aspects, the processor 14 may use an immediate crease of a protruded region (defined with respect to the bust point 760/bust slice 705) to identify the underbust level 700 such as shown in FIG. 7A. FIG. 7A shows an example of a left side image. The immediate crease may be identified by examining the y coordinates for the different z-values on the vertical slice parallel to the sagittal plane and passing through the x coordinate of the left bust point (or right bust point), respectively. The starting point for the z-value may be the height of the bust slice 705. The immediate crease may be an inflection point (e.g., point where the y value stops decreasing and then increases as the z value becomes lower). Image data below the underbust level 700 may be removed (e.g., not processed any further).


Once the selected 3D image is pre-processed, the processor 14 may define datapoints in an alignment region. In an aspect of the disclosure, the alignment region may be the upper back area. The anterior body and the lower posterior body (including the hip) have more fat tissue which may undergo more displacement and shape change during movement, and thus may not be suitable to serve as the reference portion for the alignment. In addition, the shoulders may move relative to the ribcage due to arm swing; therefore, the shoulder area may not be ideal for alignment either.


The datapoints are points on the surface of the alignment region of the 3D image. The datapoints may be a fixed number of datapoints P. In some aspects of the disclosure, P=1480, which corresponds to 37 datapoints per horizontal slice 1100 and 40 slices 1100 in the alignment region.



FIG. 10 illustrates a method of determining the values for the datapoints in the alignment region in accordance with aspects of the disclosure using horizontal slicing. The processor 14 may partition the alignment region into S equally distributed horizontal slices at S100. In some examples, the horizontal slices can be orthogonal to the longitudinal axis of the body. The S horizontal slices can be arranged by their z-coordinates, such as from bottom to top or from s=1 to s=S. For example, S may equal 40. However, the number of horizontal slices is not limited to 40 and 40 is for descriptive purposes only. Further, a fixed number of datapoints, such as 37 datapoints, can be identified on each horizontal slice in each image, e.g., 1 datapoint per 5 degrees (there may be a datapoint at both 0° and 180°). However, in other aspects of the disclosure, there may be more or fewer datapoints. In other aspects, there may be one datapoint per 10 degrees. Since these datapoints are being used for alignment purposes only, one datapoint per degree may not be needed. The numbers of slices and datapoints are merely examples and other numbers (and angles) may be used.


For example, to identify 37 datapoints, the processor 14 may identify the points on a horizontal slice from 0° to 180°, at angle increments of 5°, as shown in FIG. 11A (which is a top view of FIG. 11B). In other words, starting from 0° to 180°, there may be one datapoint identified at every 5 degrees. For example, the 1st datapoint can be a datapoint i=1 located at the bottommost slice s=1, at the angle of 0°. The 37th datapoint is the datapoint i=37 located on the bottommost slice s=1, at the angle of 180°. The x-, y-, z-coordinates of the datapoints i can be determined by the processor 14 and recorded in sequence ranging from i=1 to i=P, and the recorded locations or coordinates can be stored in the memory 16.


If a certain image point is missing in the alignment region, its coordinates can be defined or replaced by undefined values, such as not-a-number (NaN) values, to hold the space for the datapoint, and to maintain the sequence and indexing of the other datapoints among i=1 to i=P. The missing surface points can be a result of the removal of limbs (e.g., arms) during pre-processing (S59). At S102, the processor 14 may initialize a value of s to 1 to begin a sequence to identify the datapoints from the bottommost horizontal slice (s=1). The processor 14 may include a counter to count the processed slices. In other aspects, the processor may use a pointer or flag to identify the slice.


At S104, the processor 14 may partition or divide the horizontal slice s into a plurality of portions represented by an angle value a (the angle is described above). To improve an accuracy of the x-, y-, z-coordinates of the location of the datapoints, the instructions may define a threshold t corresponding to an angle tolerance of each datapoint within the horizontal slice 1100. In some aspects, the tolerance may be based on the number of datapoints. For example, for an angle value a=40° (and datapoints every 5°), the threshold may be t=2.5.


The processor 14 may partition each horizontal slice s into a plurality of portions based on a fixed angular interval defined by the angle value a and the threshold t. For example, each portion may range from an angle a−t to a+t. For example, the portion represented by the angle a=40° may range from 37.5° to 42.5°, and the portion may include multiple image points. In other aspects, there may be a threshold to average points between horizontal slices 1100 to include in the value for the datapoints. For example, if there is a horizontal slice 1100 every z=5, then the z-tolerance may be ±2.5.


At S106, the processor 14 may initialize the angle value a to a=0°. At S108, the processor 14 may determine distances between image points along the horizontal slice s (at the surface) and a reference point 1110 for the horizontal slice (all image points at the angle and within the tolerance(s)). In an aspect of the disclosure, the reference point 1110 may be at x=0 and y=0 (the z value may change based on the slice). The reference point 1110, per slice, may be the projection of the central axis 900 on the slice. However, since the horizontal slice is two dimensional, the z value does not matter. At S110, the processor 14 may determine an average of all the distances determined at S108. For each portion, the processor 14 may determine the distances of the multiple image points from the reference point 1110 for the slice and determine an average among these determined distances. At S112, the processor 14 may associate the datapoint with the average distance determined at S110 for the angle value a. The value and associated angle (and slice) may be stored in the memory 16 as the datapoint. For example, for the datapoint associated with a=40°, the processor 14 obtains the distances from image points between 37.5° and 42.5° (and values off-slice within the z-tolerance) and averages the same.


At S114, the processor 14 may determine whether the angle value a is 180° or not. In other aspects, instead of starting at zero and incrementing up to 180° (counterclockwise), the process may start at 180 degrees and decrement to zero. If the angle value a is not 180° (NO at S114), the processor 14 may increment the value of a by M at S113, where M is the angular difference between the datapoints (e.g., 0°+5°=5°), and the processor 14 may subsequently perform S108, S110, S112 and S114 for a next portion in the same horizontal slice. If the angle value a is 180° (YES at S114), the processor 14 may determine whether the horizontal slice s is the fixed number S (e.g., 40) at S116. If the horizontal slice s is not equal to S (NO at S116), then the processor 14 may increment s by one at S118 and return to S104, where the processor 14 may subsequently perform S106, S108, S110, S112, and S114 for a next horizontal slice (after S104). If the value of s is S (e.g., 40) at S116, that means all horizontal slices are processed and the process may end at S120. In other aspects of the disclosure, the processing may begin with the highest numbered horizontal slice and work downward instead of beginning with horizontal slice s=1 and working upward. FIG. 11B illustrates an example of the horizontal slices 1100 for the alignment region. FIG. 11A is a top view of FIG. 11B.
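A compact sketch of the FIG. 10 procedure, under the assumption that the alignment region is available as an (N, 3) point array with the central axis already at x=y=0; the slice count, angular step and tolerance follow the example values above, and missing bins are left as NaN placeholders.

```python
# Sketch only: per horizontal slice, average the distance of surface points to the
# projected central axis within each 5-degree bin from 0 to 180 degrees (37 bins).
import numpy as np

def alignment_datapoints(points, z_min, z_max, n_slices=40, angle_step=5.0, angle_tol=2.5):
    """points: (N, 3) alignment-region surface points; returns (n_slices, 37) distances."""
    slice_edges = np.linspace(z_min, z_max, n_slices + 1)
    angles = np.arange(0.0, 180.0 + angle_step, angle_step)
    datapoints = np.full((n_slices, angles.size), np.nan)     # NaN holds missing bins
    point_angle = np.degrees(np.arctan2(points[:, 1], points[:, 0])) % 360.0
    point_dist = np.hypot(points[:, 0], points[:, 1])
    for s in range(n_slices):
        in_slice = (points[:, 2] >= slice_edges[s]) & (points[:, 2] < slice_edges[s + 1])
        for i, a in enumerate(angles):
            in_bin = in_slice & (np.abs(point_angle - a) <= angle_tol)
            if in_bin.any():
                datapoints[s, i] = point_dist[in_bin].mean()  # averaged distance (S110)
    return datapoints
```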


Once the datapoints are defined for the selected 3D image, the processor 14 may shift the selected 3D image to align it with the base image. For example, when the base image is the stationary image, the processor 14 may shift the selected 3D image with respect to the stationary image. For the stationary image, the processor 14 may execute S9 and S11 prior to getting to S13. When one of the selected 3D images in the subset is the base image and the processing is for the first 3D image (e.g., the base image), S13 may be omitted in the first iteration as it is the base image. To determine the shift, the processor 14 first determines a shape discrepancy between the alignment region in the selected 3D image and the same portion in the base image using the defined datapoints. The shape discrepancy (also referred to herein as fit-loss) may be determined by a point-wise comparison (same datapoint) in the respective 3D images.


Since the two 3D images for comparison have been pre-processed as described above (base image and the current selected 3D image), each point's distance from the origin (0,0,0) can be calculated by the following equation (Eq.1):






$d_i = \sqrt{x_i^2 + y_i^2 + z_i^2}$  (1)


where (xi, yi, zi) are the coordinates of the i-th datapoint among the scan-surface points (i ranges from 1 to P), and di is the distance of the i-th point from the origin (0, 0, 0) or the reference point. If the coordinates of a point include undefined (e.g., NaN) values, the distance of that point from the reference point will be recorded as NaN.
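Eq. 1 translates directly to a one-liner; this is a sketch only, and the NaN propagation matches the statement above.

```python
# Eq. 1: Euclidean distance of each recorded datapoint from the origin; NaN
# coordinates propagate to NaN distances, as described above.
import numpy as np

def datapoint_distances(coords):
    """coords: (P, 3) array of datapoint x, y, z values (NaN where missing)."""
    return np.sqrt((coords ** 2).sum(axis=1))
```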


Based on the calculated distances from Eq. 1, a shape discrepancy between the pair of 3D images is given by the following equation (Eq.2):










$L(d_1, d_2) = \frac{1}{m}\sum_{i=1}^{n}\left(d_{1i} - d_{2i}\right)^2$  (2)







where d1, d2 represent two different 3D images, d1i refers to the i-th point on the first image (e.g., base image), while d2i refers to the same i-th point on the second image, e.g., the current selected 3D image. The variable n is the total number of points, which in the examples described here is P. Any value subtracting or being subtracted by a NaN value will result in a NaN value, but all the NaN values can be removed by the processor 14 before the addition. The variable m is the total number of pairs of points where both points do not include undefined values.
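A minimal sketch of Eq. 2, assuming the two distance arrays come from the datapoint_distances sketch above; invalid pairs are discarded before averaging, as the text describes.

```python
# Eq. 2 (fit-loss): point-wise mean squared difference between the two distance
# arrays, counting only pairs where neither value is NaN (that count is m).
import numpy as np

def fit_loss(d1, d2):
    diff = d1 - d2
    valid = ~np.isnan(diff)                 # drop NaN pairs before summing
    return np.sum(diff[valid] ** 2) / valid.sum()
```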


After the shape discrepancy is calculated, the current selected 3D image is shifted. For example, the current selected 3D image may be shifted vertically. The shifting changes the distances between the surface datapoints and the reference point in the current selected 3D image. The shape discrepancy is then calculated again as described above. The two shape discrepancies are compared. If the latter determined shape discrepancy is smaller than the former, the current selected 3D image is further shifted (as the 3D images are becoming more aligned) and the shape discrepancy is calculated again. The process is repeated to minimize the calculated shape discrepancy.


However, if the latter determined shape discrepancy is larger than the former, the shifting stops and the image may be returned to an earlier position or shifted in the opposite direction, and the shape discrepancy is calculated again. FIG. 12A illustrates an example of images that are offset (not aligned). FIG. 12B illustrates an example of images aligned in accordance with aspects of the disclosure based on the alignment region. FIG. 12A and FIG. 12B show the y-axis direction and the z-axis direction. In FIG. 12A, the alignment region of the 3D image acquired while the individual is moving (dynamic scan) is lower than the alignment region of the 3D image acquired while the individual is stationary (static scan) and needs to be shifted upward to align. As noted above, the static 3D image may be replaced with one of the dynamic scan 3D images as the base image.
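The vertical alignment loop could be sketched as below, reusing the datapoint_distances and fit_loss sketches above; the fixed step size and the simple descent-then-stop rule are assumptions for illustration rather than the claimed procedure.

```python
# Sketch only: shift the current image up or down in small steps and keep shifting
# while the fit-loss against the base image decreases; stop when it starts rising.
import numpy as np

def align_vertically(base_coords, moving_coords, step=1.0, max_iter=200):
    """Both inputs are (P, 3) datapoint arrays; returns the best vertical (z) offset."""
    d_base = datapoint_distances(base_coords)

    def loss_at(offset):
        shifted = moving_coords + np.array([0.0, 0.0, offset])
        return fit_loss(d_base, datapoint_distances(shifted))

    offset, best = 0.0, loss_at(0.0)
    for direction in (+1.0, -1.0):              # try shifting upward, then downward
        trial = offset + direction * step
        while max_iter > 0 and loss_at(trial) < best:
            offset, best = trial, loss_at(trial)
            trial += direction * step
            max_iter -= 1
    return offset
```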


Once the current selected 3D image is aligned with the base image, the processor 14 may define the datapoints on the breasts at S15. In an aspect of the disclosure, the datapoints on the breasts (surface) may be defined using vertical slices. In some aspects of the disclosure, each breast may be separately processed to determine the datapoints. In some aspects of the disclosure, the vertical slices may be parallel to the coronal plane (frontal plane), e.g., have the same y value in its coordinate as shown in FIG. 13A. FIG. 13A shows a perspective view of a sliced right breast (breast region) of an individual. A representative vertical slice 1300 is shown in FIGS. 13A-13C. The vertical slices may be equidistant from each other. The vertical slices may divide only the anterior region of the individual, e.g., y<0 (or y>0 depending on the definition or direction of the y-axis). Each of the dots in FIGS. 13A-13C represents a defined datapoint for the right breast (breast region). FIG. 13A shows a reference axis 1310 for the breast region. The reference axis 1310 may be defined by the bust point 760 for the breast and is orthogonal to the x-axis direction and the z-axis direction. The reference axis 1310 in turn defines the reference point 1310A for each processed vertical slice. FIGS. 13B and 13C show an example of a reference point 1310A for a processed vertical slice. The reference point 1310A for the processed vertical slice may be the bust point 760 projected onto the vertical slice, e.g., processing slice 1300A (e.g., its x and z coordinates projected onto a given y coordinate). Slicing the breast region this way allows each vertical slice to be regarded as quasi-circular. As shown in FIG. 13C, 0° and 180° may be defined on the x-axis direction and 90° and 270° may be defined on the z-axis direction. However, other angle definitions may be used.



FIG. 14 illustrates a method of determining the values for the datapoints 1305 in accordance with aspects of the disclosure using vertical slicing as shown in FIGS. 13A-13C. The processor 14 may partition one of the breast regions into T equally distributed vertical slices 1300 at S150. In some aspects of the disclosure, there may be 40 vertical slices. When the vertical slices are parallel to the coronal plane, the T vertical slices can be arranged by their y-coordinates, such as from back to front, e.g., from t=1 to t=T. However, the number of vertical slices is not limited to 40 and 40 is for descriptive purposes only. Further, a fixed number of datapoints, such as 180 datapoints 1305A, can be identified on each vertical slice in each image, e.g., 1 point per 2 degrees. Therefore, in some aspects of the disclosure, there may be a total of 7200 datapoints 1305 (180×40). However, in other aspects of the disclosure, there may be more or fewer datapoints. In other aspects, there may be one datapoint per 5 degrees. In other aspects of the disclosure, depending on the boundary determination, e.g., only an upper boundary, only datapoints on the upper breast may be determined, e.g., 0° to 180° (when the vertical slices are parallel to the coronal plane); in that case, the determination at S164 checks for 180° rather than checking whether the angle has returned to zero.


At S152, the processor 14 may initialize a value of t to 1 to begin a sequence to identify the datapoints 1305A from the innermost vertical slice (t=1) (the vertical slice closest to y=0). The processor 14 may include a counter to count the processed vertical slices. In other aspects, the processor may use a pointer or flag to identify the vertical slice.


At S154, the processor 14 may partition or divide the vertical slice t 1300 into a plurality of portions represented by an angle value a. To improve an accuracy of the x-, y-, z-coordinates of the location of the datapoints 1305A, the instructions may define a threshold tt corresponding to an angle tolerance of each datapoint within the vertical slice. In some aspects, the tolerance may be based on the number of datapoints. For an angle value a=40° and a threshold tt=1°, the processor 14 may partition each vertical slice t into a plurality of portions based on a fixed angular interval defined by the angle value a and the threshold tt, e.g., each portion can range from an angle a−tt to a+tt. For example, the portion represented by the angle a=40° can range from 39° to 41°, and the portion can include multiple image points. In other aspects, there may be a threshold to average points between vertical slices 1300 to include in the value for the datapoints 1305A. For example, if there is a vertical slice 1300 every y=5, then the y-tolerance may be ±2.5.


At S156, the processor 14 may initialize the angle value a to a=0°. At S158, the processor 14 may determine distances between all image points along the vertical slice t (at the surface) and a reference point 1310A for the vertical slice (all image points may include image points at the angle and within the angle tolerance, as well as image points off-slice within the y-tolerance).


At S160, the processor 14 may determine an average of all the distances determined at S158. For each portion, the processor 14 may determine the distances of the multiple image points from the reference point 1310A for the vertical slice 1300 and determine an average among these determined distances. At S162, the processor 14 may associate the datapoint 1305A to have the average distance determined at S160 for the angle value a. The value and associated angle (and vertical slice) may be stored in the memory 16 as the datapoint 1305A. For example, the 1st datapoint can be a point i=1 located at the innermost slice t=1, at the angle of 0°. The 90th datapoint is the point i=90 located on the innermost slice t=1, at the angle of 178°. The x-, y-, z-coordinates of the datapoints i can be determined by the processor 14 and recorded in sequence ranging from i=1 to i=Q (Q being the total number of datapoints), and the recorded locations or coordinates can be stored in the memory 16.


If a certain image point is missing on the slice, its coordinates can be defined or replaced by undefined values, such as not-a-number (NaN) values, to hold the space for the datapoint 1305A, and to maintain the sequence and indexing of other datapoints among i=1 to i=Q. The missing points can be a result of the removal of limbs (e.g., arms) during pre-processing (S59) or the optional removal of the protrusion of the upper abdomen between the breasts (e.g., the lower and central area between the two breasts). For example, FIG. 13C shows no datapoints between 270° and 0° for the example vertical slice.


At S164, the processor 14 may determine whether the angle value a is 0° again. In other aspects, instead of starting at zero and moving counterclockwise around the quasi-circle back to zero, the process may start at 360 degrees and move clockwise. If the angle value a is not 0° again (NO at S164), the processor 14 may increment the value of a by N at S165, where N is the angular difference between the datapoints (e.g., 0°+2°=2°), and the processor 14 can perform S158, S160, S162 and S164 for a next portion in the same vertical slice 1300. If the angle value a is 0° again (YES at S164), the processor 14 may determine whether the vertical slice t is the fixed number T (e.g., 40) at S166. If the vertical slice t is not equal to T (NO at S166), then the processor 14 may increment t by one at S168 and return to S154, where the processor 14 may subsequently perform S156, S158, S160, S162, and S164 for a next vertical slice (after S154). If the value of t is T (e.g., 40) at S166, that means all vertical slices 1300 are processed and the process may end at S170. In other aspects of the disclosure, the processing may begin with the outermost vertical slice and work inward instead of beginning with the innermost and working outward.


In FIGS. 13A-13C, the datapoints 1305A appear non-uniform because the reference point 1310A is off center.


This process may be repeated for the other breast region.


In some aspects of the disclosure, the vertical slices 1300A may be parallel to the sagittal plane, e.g., have the same x value in its coordinate as shown in FIG. 15A and FIG. 15B. FIG. 15A shows two examples of vertical slices 1300A. FIG. 15A also shows an example of the datapoints 1305B defined in accordance with this vertical slicing (parallel to the sagittal plane). Each of the dots in FIGS. 15A-15D represents a defined datapoint. FIG. 15A is a perspective view. FIG. 15B is a top view showing the datapoints 1305B and highlighting the datapoints 1305C for the two example vertical slices shown in FIG. 15A. In this aspect of the disclosure, the two breast regions may be processed together (there is no need to process them separately). Additionally, in this aspect of the disclosure, datapoints 1305B above the bust slice 705 may be separately processed from datapoints 1305B below the bust slice 705, since for a given y-value on a particular vertical slice there may be two z-values. The vertical slices 1300A may be equidistant from each other. The vertical slices 1300A may divide only the anterior region of the individual, e.g., y<0 (or y>0 depending on the definition or direction of the y-axis). As shown in the figures, a negative y value is in the anterior region.



FIG. 16 illustrates a method of determining the values for the datapoints 1305B in accordance with aspects of the disclosure using vertical slicing (shown in FIGS. 15A-15D). The processor 14 may partition the breast region into T′ equally distributed vertical slices 1300A based on the x-value at S150A. In some aspects of the disclosure, there may be 60 vertical slices dividing both breasts. When the vertical slices are parallel to the sagittal plane, the T′ vertical slices can be arranged by their x-coordinates, such as from left to right, e.g., from t=1 to t=T′. However, the number of vertical slices is not limited to 60 and 60 is for descriptive purposes only. In some aspects of the disclosure, a fixed number of datapoints may be defined per vertical slice. In this aspect of the disclosure, the datapoints are defined by their y coordinates rather than by angle. In some aspects, there may be 30 datapoints 1305B per slice. In other aspects, there may be a different number of datapoints for different slices depending on the shape of the region. For example, there may be more datapoints for slices in the middle of the breasts than between the breasts. The number of datapoints per slice is not limited to 30. Other numbers of datapoints may be used.


At S152, the processor 14 may initialize a value of t to 1 to begin a sequence to identify the datapoints 1305B from the vertical slice (t=1) (vertical slice closest to the left or right). The processor 14 may include a counter to count the processed vertical slice. In other aspects, the processor 14 may use a pointer or flag to identify the vertical slice.


At S154A, the processor 14 may partition or divide the vertical slice t 1300A into a plurality of portions represented by y values. Each portion may be associated with a range of y-values. To improve an accuracy of the x-, y-, z-coordinates of the location of the datapoints (e.g., 1305C for vertical slice A), the instructions may define a threshold t′″ corresponding to a y-value tolerance of each datapoint within the vertical slice 1300A. In some aspects, the tolerance may be based on the number of datapoints for the vertical slice 1300A. The more datapoints 1305B per vertical slice 1300A, the smaller the threshold t′″ may be. For example, if there are datapoints every y=2, the y-value threshold t′″ may be ±1 (or ±0.5). Additionally, the portion may include image data off-slice (image data between the vertical slices 1300A). For example, if there is a vertical slice 1300A every x=5, then the x-tolerance may be ±2.5.


At S156A, the processor 14 may initialize the y value to a minimum value (absolute value). For example, the minimum value may be y=0. At S158, the processor 14 may determine the z-coordinate for all image points associated with the portion identified in S154A (at the surface) and for the y value (the datapoint that is currently being processed) for the specific vertical slice (all image points may include image points at the specific y value and within the y tolerance, as well as image points off-slice within the x-tolerance).


At S160, the processor 14 may determine an average of all the z-coordinates determined at S158. For each portion, the processor 14 may determine the z-coordinates of the multiple image points for the vertical slice 1300A and determine an average among these determined z-coordinates. At S162A, the processor 14 may associate the datapoint 1305B with the average z-coordinate determined at S160 for the y value for the datapoint (and the x value, which is known based on the vertical slice). The average z-coordinate and associated y-value (and vertical slice, e.g., x-value) may be stored in the memory 16 as the datapoint 1305B (e.g., datapoint 1305C for Vertical Slice A). The x-, y-, z-coordinates of the datapoints may be determined by the processor 14 and recorded in sequence ranging from i=1 to i=R, and the recorded locations or coordinates may be stored in the memory 16 (where R is the maximum number of datapoints).


If a certain image point is missing on the vertical slice 1300A, its coordinates can be defined or replaced by undefined values, such as not-a-number (NaN) values, to hold the space for the datapoint 1305B, and to maintain the sequence and indexing of other points among i=1 to i=R. The missing points can be a result of the removal of limbs (e.g., arms) during pre-processing (S59) or the optional removal of the protrusion of the upper abdomen between the breasts (e.g., the lower and central area between the two breasts).


At S164A, the processor 14 may determine whether the y value being processed is the maximum y value (absolute value). This indicates that all portions of the divided vertical slice have been processed. In other aspects, instead of starting at the minimum absolute y value and increasing, the process may start at the maximum absolute y value and decrease. If the y value is not the maximum absolute y value (NO at S164A), the processor 14 may increment the value of y by the y increment between the datapoints at S165A, and the processor 14 can perform S158, S160, S162A and S164A for a next portion in the same vertical slice 1300A. If the y value is the maximum absolute y value (YES at S164A), the processor 14 may determine whether the vertical slice t is the fixed number T′ (e.g., 60) at S166A. If the vertical slice t is not equal to T′ (NO at S166A), then the processor 14 may increment t by one at S168 and return to S154A, where the processor 14 may subsequently perform S156A, S158, S160, S162A, and S164A for a next vertical slice (e.g., vertical slice B). If the value of t is T′ (e.g., 60) at S166A, that means all vertical slices 1300A are processed and the process may end at S170.
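A compact sketch of this FIG. 16 variant, assuming the anterior breast-region surface is available as an (N, 3) point array; the bin widths reflect the example spacings mentioned above and the dictionary layout is purely illustrative.

```python
# Sketch only: slices parallel to the sagittal plane are binned by x, datapoints within
# a slice are binned by y, and each datapoint value is the average z of the points in
# the bin (S158-S162A).
import numpy as np

def sagittal_slice_datapoints(points, x_step=5.0, y_step=2.0):
    """points: (N, 3) anterior breast-region surface points; returns {(x, y): mean z}."""
    x_bins = np.round(points[:, 0] / x_step) * x_step
    y_bins = np.round(points[:, 1] / y_step) * y_step
    datapoints = {}
    for x, y in {(float(a), float(b)) for a, b in zip(x_bins, y_bins)}:
        mask = (x_bins == x) & (y_bins == y)
        datapoints[(x, y)] = points[mask, 2].mean()     # averaged z for the (x, y) bin
    return datapoints
```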


In other aspects, the vertical slices may be angled with respect to both the coronal plane and the sagittal plane.


S15 is repeated for the base image (whether the base image is the 3D image acquired while the individual is stationary or one of the 3D images acquired while the individual is moving). In an aspect of the disclosure, the same vertical slicing technique (whether parallel to the coronal plane or sagittal plane or angled) may be used for the base image. This will allow for consistency in the datapoints. This way the base image and the current processed 3D image have the same number of defined datapoints.


At S17, the processor 14 may determine the vertical displacement for the 3D image being processed (3D image acquired while the individual is moving) with respect to the base image. The vertical displacement may be determined for each defined datapoint in S15 (point-wise). The vertical displacement compares the z-coordinates in the defined datapoints (same points in the base image and the current processed 3D image), e.g., relative vertical displacement. The displacement may be determined using the following equation.






$d_j = z_{ij} - z_{i0}$  (3)


where dj is an array containing the vertical displacements of all the defined datapoints for the jth 3D image (the current processed 3D image), where 1≤j≤N and N is the number of three-dimensional images in the subset, zij is the z-coordinate of the i-th defined datapoint of that jth 3D image (1≤i≤M), where M is the number of defined datapoints, while zi0 is the z-coordinate of the i-th point of the three-dimensional image for the base image.


The displacement array may be stored in the memory 16 (associated with the current processed 3D image).


At S19, the processor 14 may determine if all the 3D images in the subset have been processed. For example, since a counter may be used to track the processed 3D images, the processor may determine whether the counter value equals the number of 3D images in the subset. When there are unprocessed 3D images in the subset (NO at S19), the processor 14 may increment the counter at S20 and the processor 14 executes S9-S19 for the next 3D image acquired when the individual is moving.


When all the 3D images in the subset are processed (YES at S19), the processor 14 may calculate a displacement parameter at S21. In some aspects of the disclosure, the displacement parameter may be a standard deviation.


Once again, the displacement parameter may be calculated for each defined datapoint in S15.


The displacement parameter may be calculated using the following equation:










$SD = \sqrt{\dfrac{\sum (d_j - d_{avg})^2}{n - 1}}$  (4)







where SD represents the standard deviation of dj, davg is the mean value of dj, and n is the number of 3D images in the subset (or if one of the 3D images acquired while the individual is moving is selected as the base image, n is the remaining number of 3D images in the subset).
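A sketch of S21 combining Eq. 3 and Eq. 4, assuming the per-image datapoint z-coordinates have been stacked into a matrix; ddof=1 reproduces the n−1 denominator.

```python
# Sketch only: per-datapoint sample standard deviation of the vertical displacements
# across the subset of 3D images, ignoring NaN entries from missing datapoints.
import numpy as np

def displacement_parameter(z_stack, z_base):
    """z_stack: (N, M) z-coordinates per image and datapoint; z_base: (M,) base values."""
    displacements = z_stack - z_base                      # Eq. 3, applied row-wise
    return np.nanstd(displacements, axis=0, ddof=1)       # Eq. 4, per datapoint
```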


In an aspect of the disclosure, the processor 14 may generate a mapping, such as a heat map, at S23 using the displacement parameter calculated at S21. For example, the heat map may be superposed over the base image (such as the 3D image acquired while the individual was stationary or the one of the 3D images acquired while the individual was moving). Since the x, y, and z coordinates for each of the datapoints in the breast region were defined in S15 (for the base image as well), the processor 14 knows the position of the corresponding datapoints associated with the determined displacement parameter. In some aspects of the disclosure, the heat map may be displayed on the display 18. In some aspects, the server 40 may transmit the determined heat map to the client 50 to display the heat map on the display 18 (of the client 50).


In some aspects, the heat map may be presented by gradient colors. The dark blue color may correspond to minimal variability in vertical displacement, whereas the dark red color may correspond to maximal variability in vertical displacement (or vice versa). In other aspects, other colors may be used to differentiate the variability.


In some aspects, the heat map may use gray scaling to represent the variability in the vertical displacement. For example, a dark grey scale may represent maximal variability whereas a light grey scale may represent minimal variability.


In some aspects of the disclosure, the heat map may be separately displayed for the different breasts (right and left separately). This may be particularly useful where the breasts are separately processed via the vertical slicing (where vertical slices 1300 parallel to the coronal plane are used). FIGS. 17A and 17B illustrate an example of a heat map superposed over the right breast (FIG. 17A) and the left breast (FIG. 17B).
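For illustration only, the mapping of S23 could be rendered as a scatter of the base-image datapoint positions colored by the displacement parameter; the colormap, projection and marker size are arbitrary choices, not part of the disclosure.

```python
# Sketch only: color the base-image datapoints by their displacement SD, in the
# spirit of the heat maps of FIGS. 17A-17B (front view: x horizontal, z vertical).
import matplotlib.pyplot as plt

def plot_heat_map(coords, sd_values):
    """coords: (M, 3) base-image datapoint positions; sd_values: (M,) SD per datapoint."""
    fig, ax = plt.subplots()
    sc = ax.scatter(coords[:, 0], coords[:, 2], c=sd_values, cmap="jet", s=8)
    ax.set_xlabel("x")
    ax.set_ylabel("z")
    ax.set_aspect("equal")
    fig.colorbar(sc, label="SD of vertical displacement")
    plt.show()
```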


At S25, the processor 14 may determine a threshold for separation of the breasts from the chest (chest wall). In an aspect of the disclosure, the memory 16 may store a preset percentage. The preset percentage may be a fixed percentage of variability. The preset percentage may be multiplied with a difference in the average standard deviation of the displacement in two different areas to determine the threshold. The processor 14 may define the first area 1800 and the second area 1805 at S25. The first area 1800 may be the bust area. This first area 1800 may be associated with the highest variability in the displacement. The bust slice 705 has already been determined. In an aspect of the disclosure, the height of the first area 1800 may be preset, e.g., ±Z from the Z value of the bust slice 705. In other aspects, the height of the first area 1800 may be adjusted based on the heat map to keep the highest x percentage of displacement within the first area. The width of the first area (e.g., in the x direction) may be determined using a set percentage of the points. For example, that percentage may span from the 10% point to the 90% point of the x points. For example, if there are 100 points, then the width may include the middle 80 points (not including the first and last 10 points). The second area 1805 may be a low displacement area. The second area 1805 may exclude the armhole area (which may have high variability of displacement). The width of the second area may be defined by the x-coordinates of the left bust point and the right bust point as determined above. Thus, the second area 1805 may be narrower than the first area 1800, an example of which is shown in FIG. 18. The vertical dash lines in FIG. 18 intersect the bust points 760.


Once the areas are defined, the processor 14 may calculate the average of the standard deviation for all datapoints in the first area to obtain "A" and calculate the average of the standard deviation for all datapoints in the second area to obtain "B". The processor 14 may calculate the difference D (D=A−B). The processor 14 may then determine the threshold by multiplying the predetermined percentage by the difference D. The threshold may identify the upper boundary of the breasts.
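A minimal sketch of the S25 threshold under these definitions; the 0.5 fraction is only a placeholder, since the disclosure leaves the preset percentage as a stored value.

```python
# Sketch only: threshold = preset percentage x (average SD in bust area - average SD
# in low-displacement area), with the two areas supplied as boolean masks.
import numpy as np

def separation_threshold(sd_values, in_first_area, in_second_area, preset_fraction=0.5):
    """sd_values: (M,) per-datapoint SD; the area arguments are boolean masks."""
    A = np.nanmean(sd_values[in_first_area])      # bust area average
    B = np.nanmean(sd_values[in_second_area])     # low-displacement area average
    return preset_fraction * (A - B)
```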


The connecting area between the bra cup and the shoulder strap (as circled in FIG. 19) can restrict the upward movement of the breast to a higher extent than other parts of the bra can, resulting in a lower SD value for the relative vertical displacement in that area. This is why the second area 1805 may be narrower than the first area 1800 (e.g., avoiding the shoulder strap area).


At S27, the processor 14 may separate the breasts from the chest using the base image (e.g., the 3D image acquired while the individual was stationary or one of the 3D images acquired while the individual was moving).


In the base image, the image data below the underbust line 700 is already removed, and image data in the posterior region (e.g., y>0, or y<0 depending on the definition or direction of the y-axis) may already be removed. Thus, at S29, the processor may separate the remaining image data using the determined threshold and the standard deviation values for the datapoints (and the heat map). For example, in some aspects of the disclosure, datapoints having a standard deviation below the threshold (determined at S25) may be removed and the remaining datapoints identified as breast datapoints. The remaining datapoints may be displayed on the display 18. In some aspects, the server 40 may transmit the 3D image of the breasts to the client 50 to display on the display 18 (of the client 50). In some aspects, the heat map only containing values associated with the breast datapoints may be superposed on the 3D image of the breasts.


In other aspects of the disclosure, the breasts may be separated based on the vertical slices 1300 or y-value separation. The method of determining the separation vertical slice or y value may be different depending on the direction of the vertical slicing. FIG. 20 illustrates a method of determining the separation vertical slice in accordance with aspects of the disclosure. This method may be used when the datapoints are defined with vertical slices 1300 parallel to the coronal plane. In this aspect of the disclosure, the separation slice may be determined for each breast separately.


In accordance with aspects of the disclosure, for angles within a range, the slice (for the angle) having the standard deviation value closest to the threshold determined at S25 (separation) is identified. The identified slices for all the angles within the range are sorted, the median is determined, and the median slice is selected as the separation vertical slice. In some aspects of the disclosure, the angle range may be 0° to 90°. However, the maximum angle may change depending on whether the individual is wearing a bra with straps or not.


At S200, the processor 14 may set the angle for processing to the first angle. For example, the processor 14 may set the angle to zero. FIG. 21A shows the orientation for the angles. An angle of 0° is parallel to the x-axis. An angle of 90° is parallel to the z-axis.


At S202, the processor 14, for the processing angle (e.g., 0°), identifies all the SD values for each of the vertical slices (e.g., 40) associated with the datapoint. Each of the SD values is compared with the threshold determined in S25. The vertical slice having the SD value closest to the threshold is recorded (for the datapoint for the processing angle) and stored in the memory 16. At S204, the processor 14 may determine if the current processed angle equals the second angle, e.g., 90°. Since this is the first iteration, the determination is NO. The processor 14 may increment the processing angle to the angle associated with the next datapoint at S206. As described above, there may be a datapoint every 2° and thus the processing angle would now be 2°. S202 and S204 may be repeated for each datapoint between the first angle and the second angle. Using the above example, there may be 46 datapoints between the first angle and the second angle.


For example, at 0°, the processor 14 may determine that vertical slice 32 is the closest and at 2°, the processor 14 may determine that vertical slice 36 is the closest to the threshold . . . and at 88°, the processor 14 may determine the vertical slice 40 is the closest to the threshold.


Once all the angles between the first angle and the second angle are processed, the processor 14 may determine that the angle equals the second angle (YES at S204) and the processor 14 sorts all the determined vertical slices per angle from lowest to highest (or vice versa) to determine the median vertical slice of all identified slices. The median vertical slice is then identified as the separation slice 2100 at S208. An example of the separation slice 2100 is shown in FIGS. 21A and 21B. In an aspect of the disclosure, when there is an even number of identified vertical slices, the separation slice 2100 may be defined by taking an average of the two vertical slices closest to the middle of the identified slices. For example, if the middle two identified slices are the 34th and 35th slices, a new slice having a y-coordinate midway between the y-coordinates of the 34th and 35th slices, respectively, may be used as the separation slice.
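The median-slice selection of FIG. 20 could be sketched as follows, assuming the per-datapoint SD values are arranged as a (slices × angles) grid; this is an illustration only, and an even count of identified slices would still need the midpoint handling described above.

```python
# Sketch only: for each angle between 0 and 90 degrees, pick the vertical slice whose
# SD is closest to the separation threshold, then take the median of those slices.
import numpy as np

def separation_slice(sd_grid, threshold, angle_columns):
    """sd_grid: (T, A) SD values, rows = vertical slices, columns = datapoint angles."""
    closest = [int(np.nanargmin(np.abs(sd_grid[:, a] - threshold))) for a in angle_columns]
    return np.median(sorted(closest))     # the median identified slice (S208)
```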


At S27, the processor 14 may separate the breasts from the chest using the base image using the separation slice 2100.


In other aspects of the disclosure, the breasts may be separated from the chest using a determined y-value. FIG. 22 illustrates a method of determining the separation y value in accordance with aspects of the disclosure. In accordance with this aspect of the disclosure, for each vertical slice t=1 to t=T′, the y value whose SD is closest to the threshold is identified; the identified y values are then sorted and the median of all identified y values is determined. The median identified y-value is defined as the separation y-value. This method of determining the separation y value may be used when the vertical slicing method uses vertical slices parallel to the sagittal plane.


At S220, the processor 14 may set the vertical slice 1300A for processing to a first slice, e.g., t=1.


At S222, the processor 14, for the processing vertical slice (e.g., t=1), identifies the SD values for the datapoints, each associated with a y-value. Each of the SD values is compared with the threshold determined in S25. The y-value associated with the SD value closest to the threshold is recorded and stored in the memory 16 in association with the processing vertical slice. At S224, the processor 14 may determine if the current processed vertical slice equals the maximum number of slices T′, e.g., 60. Since this is the first iteration, the determination is NO. The processor 14 may increment the processing vertical slice to the next vertical slice at S226 (t=t+1). S222 and S224 may be repeated for each vertical slice 1300A. Using the above example, there may be 60 slices and thus there may be 60 y values determined as being closest to the threshold (one per vertical slice).


For example, at vertical slice X=25, the processor 14 may determine that y value 54 is the closest to the threshold; at vertical slice X=20, that y value 59 is the closest to the threshold . . . and at vertical slice X=0, that y value 39 is the closest to the threshold.


Once all the vertical slices 1300A are processed, the processor 14 may determine that t=T′ (YES at S224) and the processor 14 may sort all the determined y values from lowest to highest (or vice versa) to determine the median y value of all identified y values at S228. The median y value is then identified as the separation y value at S230. In an aspect of the disclosure, when there is an even number of identified y-values, the separation y value may be defined by taking an average of the two y-values closest to the middle of the identified y-values. For example, if the middle two identified y-values are 11 and 12, the separation y-value may be 11.5.


At S27, the processor 14 may separate the breasts from the chest using the base image using the separation y value.


The separated breasts may be subsequently displayed on the display 18 as described above.



FIGS. 23A-23C illustrate an example of the vertical displacement for three different body types where the 3D images are aligned in the alignment region in accordance with aspects of the disclosure. In each of the figures (FIGS. 23A-23C), three different images are shown (e.g., 2301-2303 in FIG. 23A, 2304-2306 in FIG. 23B and 2307-2309 in FIG. 23C). The observation is of the sagittal plane at the right bust point. As can be seen, the three different body types produce three different vertical displacements.


In other aspects of the disclosure, the time series analysis of the vertical displacement and subsequent threshold determination may be used to generate a cup size recommendation for the individual.



FIG. 24 illustrates an example of a method for generating a cup size recommendation in accordance with aspects of the disclosure. The method uses the previously determined threshold (at S25) to determine how to separate the breast from the chest. At S250, the processor 14 may retrieve the determined threshold from the memory 16. At S252, the processor 14 may remove the image data to separate the breast from the chest. In an aspect of the disclosure, this may be done on the 3D image acquired while the individual is stationary. In an aspect of the disclosure, when generating the cup size recommendation occurs immediately after or shortly after S25/S27, S250 and S252 may be omitted as the breasts may already be separated.


At S254, the processor 14 may define the datapoints in the breasts (after separation). In an aspect of the disclosure, the datapoints may be defined using horizontal slicing in a similar manner as described above for defining the datapoints in the alignment region (see FIG. 10). At S100, instead of dividing the alignment region into S horizontal slices, the processor 14 may divide the separated breasts into the horizontal slices. The same or a different number of slices may be used. The angle increments for the portions may be different. For example, for each horizontal slice, there may be a datapoint every 2°. Additionally, the initial angle (at S106) and the final angle (all angles processed) (at S114) may be different, as the breasts are on the anterior of the individual, whereas the alignment region is on the posterior. In some aspects, the initial angle may be 180° and the final angle may be 0° (with increments moving counterclockwise). This angle orientation is consistent with that shown in FIG. 11A. In other aspects, the angles may be reoriented since the breasts are only on the anterior portion. Therefore, the angles may be redefined to 0° and 180° or −180° (or any other 180-degree range). The remaining steps are similar and will not be described again in detail. A description of defining datapoints on the breasts using horizontal slicing is provided in PCT application Serial No. US2020/54172 filed Oct. 3, 2020, which is incorporated herein by reference.


In an aspect of the disclosure, the y coordinate for the reference point for each slice may be the y coordinate of the separation slice or the y separation value. In some aspects, the value may be recentered such that it intersects the origin (y=0). In an aspect of the disclosure, the x coordinate for the reference point may intersect the central axis 900. The z-coordinate is the z-value of the horizontal slice.


In other aspects of the disclosure, the datapoints defined in S15 (vertical slicing) may be used.


At S256, the processor 14 may determine a shape discrepancy with a model associated with each cup size. In an aspect of the disclosure, each cup size has its own model (prototype). The model is a representative size for the specified cup size. The 3D image of each model acquired while the model is stationary may be processed as described above to define the datapoints for the breasts. Additionally, the boundary of the breasts in each model may be defined as described above.


For each model (cup size), the processor 14 may calculate the shape discrepancy using equations 1 and 2. In equation 2, d1i refers to the i-th point on the 3D image for the model for a specific cup size (acquired while the model is stationary), while d2i refers to the same i-th point on the 3D image acquired while the individual is stationary. Therefore, at S256, the processor may determine CS different shape discrepancies, where CS is the number of different cup size models.


After all the shape discrepancies are calculated, the processor 14 may compare the values. Based on the comparison, the processor 14 may identify the lowest shape discrepancy. The lowest shape discrepancy may indicate that the individual breast size is most similar to the model 3D image that resulted in the lowest shape discrepancy.


At S258, the processor 14 may generate a recommendation for the cup size based on the lowest shape discrepancy. In some aspects, the processor 14 may identify the cup size associated with the lowest shape discrepancy and issue a recommendation. In some aspects, the recommendation may be displayed on the display 18. In some aspects, where there is a server 40/client 50 relationship, the server 40 may transmit the recommendation to the client 50 via the communication interfaces 45/55, respectively, for display on the display 18 at the client 50. For example, when an individual would like to have a size recommendation, the individual may use their own personal devices to acquire the 3D images and transmit the same to the manufacturer for the above processing and recommendation. The manufacturer may perform the above-identified functions and transmit the recommendation back to one of the individual's personal devices. In some aspects, the recommendation may be emailed or texted to the individual. In some aspects, the recommendation may be in the form of a report containing one or more of the scanned 3D images.
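S256-S258 reduce to a small comparison loop; the sketch below reuses the fit_loss sketch from Eq. 2 above, and the cup-size dictionary is hypothetical.

```python
# Sketch only: compute the Eq. 2 fit-loss between the individual's breast datapoint
# distances and each cup-size model's distances, and recommend the closest cup size.
def recommend_cup_size(individual_d, model_d_by_cup):
    """individual_d: (Q,) distance array; model_d_by_cup: e.g. {'A': array, 'B': array}."""
    losses = {cup: fit_loss(model_d, individual_d)
              for cup, model_d in model_d_by_cup.items()}
    return min(losses, key=losses.get)      # lowest shape discrepancy (S258)
```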


In other aspects of the disclosure, displacement between acquired 3D images may be used to evaluate the performance of a garment. FIG. 25 illustrates an example of a method for evaluating the performance of a garment in accordance with aspects of the disclosure. The garment may be a bra, such as a sports bra, or a shirt, such as a form-fitting shirt. A goal of a sports bra is to restrict displacement of the breasts to reduce breast pain and discomfort during physical activities. Additionally, when there is a high displacement, it may be an indication that the bra or garment is too big or does not properly fit. Many of the features illustrated in FIG. 25 are similar to those described above and will not be described again in detail. Similar to above, the method may comprise receiving the 3D images (S1 and S3). In this case, the 3D images are acquired while the individual is wearing the same garment, e.g., a sports bra. Like above, one of the 3D images is a base image for comparison to determine the displacement. The base image may be a 3D image acquired while the individual is stationary (e.g., S1). Alternatively, the base image may be one of the 3D images acquired while the individual is moving (e.g., S3). If the base image is one of the 3D images acquired while the individual is moving, S1 may be omitted.


At S5, a subset of the 3D images acquired while the individual is moving is selected for further processing. When one of the 3D images acquired while the individual is moving is the base image, the selection also includes setting the base image. A description of selecting the subset of 3D images is provided above. Features S9-S19 are performed for each of the selected 3D images. A counter may be used to track the current 3D image being processed. The counter may be initialized to 1 at S9. When a 3D image acquired while the individual is stationary is the base image, S9, S11 and S254A are performed for that 3D image. Since the base image is used for comparison, S13 may be omitted because the other 3D images are aligned with the base image at S13. When one of the 3D images acquired while the individual is moving is set as the base image, S13 and S17A may be omitted for the first iteration (e.g., for the selected 3D image that is the base image).


At S254A, the datapoints may be defined for the breasts (breast region). This is performed on each of the 3D images in the subset (and, if the 3D image acquired while the individual is stationary is used as the base image, datapoints may be defined on the 3D image acquired while the individual is stationary). Once the 3D images are aligned, the processor 14 may separate the anterior region from the posterior region (at y=0). The breasts are located in the anterior region. Also, because the underbust is identified in S9, image points below the underbust have already been removed. Therefore, at S254A the processed 3D image is bounded by the underbust in the z direction and y=0 in the y direction. Since the body is divided into posterior and anterior, the anterior portion spans 180 degrees. The x-axis may be identified as 0° and 180°; however, other angles may be used.


The breasts may be divided using horizontal slicing in a similar manner as described in FIG. 10. Each image is divided into the same number of horizontal slices for comparison. In some aspects, there may be 50 horizontal slices. The distance between each slice may be the same. Each horizontal slice may have a predetermined number of datapoints. For example, there may be a datapoint for each degree such that each horizontal slice has 181 datapoints (including both endpoints). Therefore, if there are 50 horizontal slices where each slice has 181 datapoints, there may be a total of 9050 datapoints. The reference may be the central axis 900 projected onto the horizontal slice (for the average distance determination as described above). Once the datapoints are defined for a 3D image (and the base image), the processor may determine the displacement at S17A. In this case, instead of vertical displacement (z direction), the displacement in the x and y directions is determined. This is to determine the shape change (shape deformation) as the individual is moving, such as running. The displacement may be determined using the following equation:






d′
j=√{square root over (xij2+yij2)}−√{square root over (xi02+yi02)}  (5)


where d′_j is an array containing, for each defined datapoint of the j-th 3D image (1≤j≤N), the change in distance from the reference point relative to the base image, where N is the number of 3D images in the subset (non-base images); x_ij and y_ij are the x-coordinate and y-coordinate, respectively, of the i-th datapoint of that 3D image (1≤i≤P), where P is the total number of datapoints; and x_i0 and y_i0 are the x-coordinate and y-coordinate, respectively, of the i-th datapoint of the base image (of the same individual) (1≤i≤P). The z-coordinate of each defined datapoint may be ignored when determining this displacement.
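As an illustration only, the following is a minimal sketch of the datapoint definition (S254A) and the horizontal displacement of equation (5) (S17A), under stated assumptions: each processed 3D image is represented as a NumPy array of (x, y, z) surface points, already cropped to the anterior region (y ≥ 0, above the underbust) and centred so the projected central axis 900 passes through x=0, y=0; the slice count and angular step follow the example above (50 slices, one datapoint per degree from 0° to 180°). The function names and the z-bounds parameter are hypothetical. Because each datapoint stores the average distance from the central axis, equation (5) reduces to a difference of these stored radii.

```python
import numpy as np

def define_breast_datapoints(points, z_bounds, n_slices=50, n_angles=181):
    """Average radial distance from the central axis per (slice, angle) bin (S254A)."""
    z_lo, z_hi = z_bounds                                # e.g., underbust level to top of the region
    slice_edges = np.linspace(z_lo, z_hi, n_slices + 1)
    angles = np.linspace(0.0, 180.0, n_angles)           # 0° and 180° lie on the x-axis

    radius = np.hypot(points[:, 0], points[:, 1])        # distance to the projected central axis
    theta = np.degrees(np.arctan2(points[:, 1], points[:, 0]))

    datapoints = np.full((n_slices, n_angles), np.nan)   # NaN marks undefined datapoints
    for s in range(n_slices):
        in_slice = (points[:, 2] >= slice_edges[s]) & (points[:, 2] < slice_edges[s + 1])
        for a, angle in enumerate(angles):
            in_bin = in_slice & (np.abs(theta - angle) <= 0.5)   # points within half a degree
            if in_bin.any():
                datapoints[s, a] = radius[in_bin].mean()          # average distance for this bin
    return datapoints                                    # 50 x 181 = 9050 datapoints

def horizontal_displacement(datapoints_j, datapoints_0):
    """Equation (5): radius in image j minus radius in the base image, per datapoint."""
    return datapoints_j - datapoints_0
```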


Once all the 3D images in the subset have been processed (and the 3D image acquired while the individual is stationary, if used) (YES at S19), the processor 14 may calculate the displacement parameter for each datapoint at S21. Similar to above, the displacement parameter may be a standard deviation calculated using equation 4.
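As a sketch only, and assuming the per-image displacement arrays d′_j have been stacked into a single array of shape (N, 50, 181) for the N non-base images, the per-datapoint standard deviation described above may be computed as follows (NaN entries, i.e., undefined datapoints, are ignored; the exact form of equation 4 is not reproduced here, so NumPy's default population standard deviation is shown):

```python
import numpy as np

def displacement_parameter(displacements):
    """One standard deviation per defined datapoint across the subset of 3D images."""
    d = np.asarray(displacements, dtype=float)   # shape (N, 50, 181): N images, 9050 datapoints
    return np.nanstd(d, axis=0)                  # shape (50, 181): one value per datapoint
```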


In an aspect of the disclosure, the processor 14 may generate a mapping, such as a heat map, using the displacement parameter calculated at S21. For example, the heat map may be superposed over the base image (such as the 3D image acquired while the individual was stationary or one of the 3D images acquired while the individual was moving).


In some aspects of the disclosure, the heat map may be displayed on the display 18. In some aspects, the server 40 may transmit the determined heat map to the client 50 to display the heat map on the display 18 (of the client 50).


In some aspects, the heat map may be presented using gradient colors. A dark blue color may correspond to minimal variability in displacement (low shape change), whereas a dark red color may correspond to maximal variability in displacement (high shape change) (or vice versa). In other aspects, other colors may be used to differentiate the variability.


In some aspects, the heat map may use grey scaling to represent the variability in the displacement (shape change). For example, a dark grey scale value may represent maximal variability whereas a light grey scale value may represent minimal variability.
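As an illustrative sketch only, assuming the per-datapoint SD values and the (x, y, z) positions of the corresponding datapoints on the base image are available, the heat map coloring could be produced with a standard colormap; Matplotlib's 'jet' colormap runs from dark blue (minimal variability) to dark red (maximal variability), matching the gradient described above, and 'gray' could be substituted for the grey-scale variant. The function and argument names are hypothetical.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_displacement_heatmap(datapoint_xyz, sd_values, cmap_name="jet"):
    """Render the displacement parameter as colours superposed on the base-image datapoints."""
    xyz = np.asarray(datapoint_xyz, dtype=float).reshape(-1, 3)
    sd = np.asarray(sd_values, dtype=float).ravel()
    valid = ~np.isnan(sd)                               # skip undefined datapoints
    lo, hi = sd[valid].min(), sd[valid].max()
    normalized = (sd[valid] - lo) / (hi - lo) if hi > lo else np.zeros(int(valid.sum()))

    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    ax.scatter(xyz[valid, 0], xyz[valid, 1], xyz[valid, 2],
               c=plt.get_cmap(cmap_name)(normalized), s=4)
    plt.show()
```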



FIGS. 26A and 26B illustrate two examples of the heat map. At S300, the processor 14 may identify areas in the heat map with a high level of variability in the displacement. This identification may be based on a threshold. The threshold may be stored in advance. In an aspect of the disclosure, the threshold may be set by the manufacturer of the garment. The threshold may be an expected variability in the displacement.


In other aspects of the disclosure, the determination may be relative to the individual. For example, the processor 14 may calculate the average of the SD values for all datapoints, determine the standard deviation of those values, and identify datapoints based on that standard deviation. Looking at the two example heat maps in FIG. 26A and FIG. 26B, the heat map in FIG. 26A has a higher variability in the upper breast region than the heat map in FIG. 26B. However, the lower breast region in FIG. 26B has a higher variability than the same region in FIG. 26A.
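A minimal sketch of the individual-relative identification, assuming the per-datapoint SD values are available; the specific rule shown (mean of the SD values plus one standard deviation of those values) is an assumption for illustration, and a stored manufacturer-supplied threshold could be used instead as described for S300.

```python
import numpy as np

def high_variability_mask(sd_values):
    """Flag datapoints whose variability is high relative to this individual."""
    sd = np.asarray(sd_values, dtype=float)
    threshold = np.nanmean(sd) + np.nanstd(sd)   # mean of the SDs plus their spread
    return sd > threshold                        # True where the datapoint should be highlighted
```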


At S302, the processor 14 may generate an evaluation report based on the analysis of the variability in the heat map. For example, the evaluation report may include recommendations for designing the garment. Using the example in FIG. 26A, the evaluation report may indicate a high level of variability in the upper region of the breasts with a recommendation to include additional support in that region. This indication may include the specific amount of the variability and the datapoints associated with the high variability. Additionally, the indication may include a percentage of the datapoints having the high variability. This percentage may be determined by identifying the total number of datapoints having variability above the threshold, dividing by the total number of breast datapoints, and multiplying by 100.
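As a sketch of the percentage included in the evaluation report, assuming the per-datapoint SD values and a threshold are available; undefined datapoints are excluded from the count.

```python
import numpy as np

def percent_high_variability(sd_values, threshold):
    """Percentage of defined breast datapoints whose variability exceeds the threshold."""
    sd = np.asarray(sd_values, dtype=float)
    defined = ~np.isnan(sd)                                  # ignore undefined datapoints
    n_high = np.count_nonzero(sd[defined] > threshold)
    return 100.0 * n_high / np.count_nonzero(defined)
```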


Using the example in FIG. 26B, the evaluation report may indicate a high level of variability in the lower region of the breasts with a recommendation to include additional support in that region.


In some aspects, the analysis may also indicate that the garment does not properly fit. For example, in a case where there is a high level of variability throughout the breasts, this may indicate that the bra does not properly fit.


In some aspects, the method described in FIG. 25 may be used when the manufacturer is designing the garment. The method may be performed for the fit models for each cup size or garment size to confirm the garment's performance for any/all sizes.


The heat maps may also be used to observe patterns in the variability of the displacement among different sizes and body shapes. For example, it may be expected that the upper breasts have the most variability in shape deformation. The observed patterns may be used to develop variability thresholds. The heat maps may also be used to determine the effect of certain materials, such as restraints and supports including wires and straps, in different areas.


In some aspects of the disclosure, the method described in FIG. 25 may be executed twice for the same individual, once where the individual is nude and a second time where the individual is wearing a garment such as a sports bra. The heat maps that are generated at S23 may be displayed on the display, side-by-side, for comparison. In an aspect of the disclosure, the difference between the SD values for each datapoint may be determined and displayed as a separate heat map. In an aspect of the disclosure, areas of the breasts may be defined and the percent reduction in the variability may be determined for each breast region. The percent reduction may be displayed for each region. In other aspects, the percent reduction may be determined for the whole breast instead of specific regions or areas.
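A minimal sketch of the nude-versus-garment comparison, assuming the method of FIG. 25 has been run twice for the same individual and the two per-datapoint SD arrays are aligned datapoint-for-datapoint; the region mask argument is a hypothetical way to restrict the percent reduction to a defined breast region.

```python
import numpy as np

def sd_difference(sd_nude, sd_garment):
    """Per-datapoint difference, which may be displayed as a separate heat map."""
    return np.asarray(sd_nude, dtype=float) - np.asarray(sd_garment, dtype=float)

def percent_reduction(sd_nude, sd_garment, region_mask=None):
    """Percent reduction in variability when the garment is worn, per region or overall."""
    nude = np.asarray(sd_nude, dtype=float)
    garment = np.asarray(sd_garment, dtype=float)
    if region_mask is not None:                  # restrict to a defined breast region
        nude, garment = nude[region_mask], garment[region_mask]
    return 100.0 * (np.nanmean(nude) - np.nanmean(garment)) / np.nanmean(nude)
```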


In some aspects, the heat maps may be used for recommendations on postures or changes to running style. For example, the heat map may show an asymmetry of the variability in the displacement between the left and right breasts, which may be caused by asymmetry of the breasts, the running posture (including rotation and sway of the torso during running), and/or the interaction between the bra and the breasts.
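As an illustration only, a left/right asymmetry in the variability could be quantified from the per-datapoint SD values; the split at the middle angular column (near 90°, roughly the midline between the breasts under the 0°-180° convention above) and the use of the mean SD per side are assumptions for this sketch.

```python
import numpy as np

def left_right_asymmetry(sd_values):
    """Difference in mean variability between the two sides of the anterior region."""
    sd = np.asarray(sd_values, dtype=float)      # shape (n_slices, n_angles)
    mid = sd.shape[1] // 2                       # column nearest the midline (about 90°)
    side_a, side_b = sd[:, :mid], sd[:, mid + 1:]
    return np.nanmean(side_a) - np.nanmean(side_b)   # sign indicates which side moves more
```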


In some aspects of the disclosure, the methods described herein may be used to track and evaluate the performance of breast implants or breast reconstruction for surgeons. For example, when one breast is reconstructed, a goal is for it to have displacement similar to that of the other breast (e.g., bounce or shape movement). In accordance with aspects of the disclosure, the displacement of the reconstructed breast may be compared with that of the other breast (both vertical displacement and horizontal displacement). The displacement pattern(s) may be displayed as heat maps for the reconstructed breast and the other breast, e.g., side-by-side.


Similarly, when both breasts are reconstructed or have implants, a goal is for them to have similar displacement (e.g., bounce or shape movement). In accordance with aspects of the disclosure, the displacement of the reconstructed breasts may be compared (both vertical displacement and horizontal displacement). The displacement pattern(s) may be displayed as a heat map, e.g., side-by-side. The heat maps may be used to confirm that the movement is substantially the same. Additionally, the heat maps may be used to confirm that the movement is similar to the movement of real breasts.


The terminology used herein is for the purpose of describing particular aspects only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.


As used herein, the term “processor” may include a single core processor, a multi-core processor, multiple processors located in a single device, or multiple processors in wired or wireless communication with each other and distributed over a network of devices, the Internet, or the cloud. Accordingly, as used herein, functions, features or instructions performed or configured to be performed by a “processor”, may include the performance of the functions, features or instructions by a single core processor, may include performance of the functions, features or instructions collectively or collaboratively by multiple cores of a multi-core processor, or may include performance of the functions, features or instructions collectively or collaboratively by multiple processors, where each processor or core is not required to perform every function, feature or instruction individually.


Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.


The corresponding structures, materials, acts, and equivalents of all means or step plus function elements, if any, in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. Aspects were chosen and described in order to best explain the principles and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.

Claims
  • 1. A non-contact method for determining a boundary of breasts comprising: receiving, by a processor, a plurality of three-dimensional (3D) images, the plurality of 3D images being successive 3D images, the plurality of 3D images including the breasts of the same individual, where the 3D images are acquired while the individual is moving;receiving, by the processor, a three-dimensional (3D) image acquired while the individual is stationary;defining a number of datapoints on the surface of the breasts in the 3D image acquired while the individual is stationary and a number of datapoints on the surface of an alignment region;selecting a subset of the 3D images acquired while the individual is moving;for each selected 3D image in the subset,pre-processing the selected 3D image to at least remove image data outside a predetermined region and rotate the selected 3D image to have each selected 3D image in the same orientation as the 3D image acquired while the individual is stationary;aligning the selected 3D image with respect to the alignment region of the 3D image acquired when the individual is stationary;defining a number of datapoints on the surface of the breasts in the selected 3D image; andcomparing the selected 3D image with the 3D image acquired when the individual is stationary by determining for each defined datapoint a vertical displacement;determining, for each defined datapoint, a displacement parameter based on the determined vertical displacement for each 3D image in the subset of 3D images with respect to the 3D image acquired when the individual is stationary for the same defined datapoint;generating a mapping based on the displacement parameter for each defined datapoint; anddetermining the boundary of the breasts using a threshold based on the mapping.
  • 2. The non-contact method of claim 1, wherein the subset of 3D images comprises 3D images showing at least one complete gait cycle and wherein the subset of 3D images comprises at least a predetermined number of 3D images.
  • 3. The non-contact method of claim 1 or claim 2, wherein the subset of 3D images comprises 3D images acquired after a preset number of 3D images and before a preset number of 3D images.
  • 4. The non-contact method of any one of claims 1 to 3, wherein the predetermined region includes the torso and the preprocessing further comprises identifying an underbust level and bust point for each breast, and removing image data below the identified underbust level.
  • 5. The non-contact method of any one of claims 1 to 4, wherein the alignment region is at an upper back area of the individual.
  • 6. The non-contact method of claim 5, wherein the aligning comprises minimizing a shape discrepancy between each selected 3D image and the 3D image acquired while the individual is stationary by iteratively moving a selected 3D image and calculating the shape discrepancy.
  • 7. The non-contact method of any one of claims 1 to 6, wherein the defining a number of datapoints on the surface of the breasts comprises: partitioning, by the processor, each breast into vertical slices;partitioning, by the processor, each vertical slice into a plurality of portions on the surface of the respective breast based on a fixed angular interval, wherein each portion corresponds to an angle value, and each portion includes a set of points;for each portion on each slice: determining, by the processor, an average distance among distances of the set of points with respect to one of the associated reference points for a corresponding vertical slice; andsetting, by the processor, a point associated with the average distance as a datapoint represented by the angle value corresponding to the portion, where the datapoint is one of the number of datapoints identified.
  • 8. The non-contact method of claim 7, wherein the defining a number of datapoints on the surface of the breasts further comprises: determining, an absence of image points in particular portions in the vertical slices, wherein the absent image points are removed from the selected 3D image during the pre-processing; andassigning a set of undefined values to the absent image points in the particular portion as datapoints.
  • 9. The non-contact method of any one of claims 1 to 8, wherein the pre-processing further comprises determining whether another body part is covering a surface of the breast and torso region and in response to determining that another body part is covering a surface of the breast or torso region, removing image data associated with the another body part and filling in a space corresponding to the removed image data with surface image points predicted for the space to maintain a curvature with a surrounding surface of the breast or maintain the curvature of the torso region.
  • 10. The non-contact method of any one of claims 1 to 9, wherein the vertical displacement is determined using dj=zij−zi0, where dj is an array containing the vertical displacements of all the defined datapoints for the jth 3D image, where j is 1≤j≤N, where N is the number of 3D images in the subset, zij is the z-coordinate of the i-th defined datapoint of that jth 3D image (1≤i≤M), where M is the number of defined datapoints, while zi0 is the z-coordinate of the i-th point of the 3D image acquired while the same individual is stationary.
  • 11. The non-contact method of claim 10, wherein the displacement parameter is a standard deviation.
  • 12. The non-contact method of claim 7, wherein the vertical slices are parallel to coronal plane.
  • 13. The non-contact method of any one of claims 1 to 12, wherein the threshold is determined based on a range of the displacement parameter and a preset percentage.
  • 14. The non-contact method of claim 13, wherein the threshold is determined by obtaining an average of the displacement parameters in a first region, subtracting an average of the displacement parameters in a second region, and multiplying by the preset percentage.
  • 15. The non-contact method of claim 14, wherein the determining of the boundary further comprises identifying, for each angle having a datapoint between a first angle and a second angle, a vertical slice having the displacement parameter closest to the threshold and identifying a median vertical slice among the identified vertical slices.
  • 16. The non-contact method of claim 15, further comprising removing datapoints in the posterior direction of the median vertical slice.
  • 17. The non-contact method of any one of claims 1 to 16, wherein the pre-processing further comprises: determining a first average value of image points in the predetermined region in a first direction, the first direction being orthogonal to a longitudinal axis of the individual;determining a second average value of image points in the predetermined region in a second direction orthogonal to the first direction and orthogonal to the longitudinal axis of the body; anddefining the central axis of the predetermined region as intersecting by the first average value and the second average value and parallel to the longitudinal axis of the body.
  • 18. The non-contact method of claim 17, wherein the pre-processing further comprises shifting the selected 3D image such that the central axis intersects an origin.
  • 19. The non-contact method of any one of claims 1 to 11, wherein the defining a number of datapoints on the surface of the breasts comprises: partitioning, by the processor, each breast into vertical slices, the vertical slices being parallel to the sagittal plane;partitioning, by the processor, each vertical slice into a plurality of portions on the surface of the respective breast based on a fixed interval with respect to a first direction, wherein each portion corresponds to a specific value in the first direction, and each portion includes a set of points, the first direction being orthogonal to the longitudinal axis and parallel to the sagittal plane;for each portion on each slice: determining, by the processor, an average coordinate among coordinates of the set of points for a corresponding vertical slice, the coordinate being in a direction parallel to the longitudinal axis; andsetting, by the processor, a point associated with the average coordinate as a datapoint represented by the specific value corresponding to the portion, where the datapoint is one of the number of datapoints identified.
  • 20. The non-contact method of claim 19, wherein the determining of the boundary further comprises, identifying, for each vertical slice, the specific value having the displacement parameter closest to the threshold and identifying a median specific value among the identified specific values.
  • 21. The non-contact method of claim 20, further comprising removing datapoints in the posterior direction of the median specific value.
  • 22. The non-contact method of any one of claims 1 to 21, further comprising displaying the mapping.
  • 23. The non-contact method of claim 22, further comprising removing image data from the 3D image acquired when the individual is stationary based on the threshold and displaying a 3D image of the breasts.
  • 24. A non-contact method for predicting a cup size of breasts of an individual comprising: receiving, by a processor, a plurality of three-dimensional (3D) images, the plurality of 3D images being successive 3D images, the plurality of 3D images including the breasts of the same individual, where the 3D images are acquired while the individual is moving;receiving, by the processor, a three-dimensional (3D) image acquired while the individual is stationary;defining a number of datapoints on the surface of the breasts in the 3D image acquired while the individual is stationary and a number of datapoints on the surface of an alignment region;selecting a subset of the 3D images acquired while the individual is moving;for each selected 3D image in the subset, pre-processing the selected 3D image to at least remove image data outside a predetermined region in the selected 3D image and rotate the selected 3D image to have each selected 3D image in the same orientation as the 3D image acquired while the individual is stationary;aligning the selected 3D image with respect to the alignment region of the 3D image acquired when the individual is stationary;defining a number of datapoints on the surface of the breasts in the selected 3D image; andcomparing the selected 3D image with the 3D image acquired when the individual is stationary by determining for each defined datapoint a vertical displacement;determining, for each defined datapoint, a displacement parameter based on the determined vertical displacement for each 3D image in the subset of 3D images with respect to the 3D image acquired when the individual is stationary for the same defined datapoint;generating a mapping based on the displacement parameter for each defined datapoint;determining a boundary of the breasts using a threshold value based on the mapping;separating the breasts from other parts of the 3D image acquired while the individual is stationary based on the threshold value;defining a number of datapoints on the surface of the breasts in the 3D image acquired while the individual is stationary using horizontal slicing;calculating a shape discrepancy between the breasts in the 3D image acquired while the individual is stationary using the defined datapoints and datapoints in 3D images of breasts associated with known cup sizes, respectively, each 3D image for the known cup sizes being acquired while a model is stationary; anddetermining the cup size based on the calculated shape discrepancy for each known cup size.
  • 25. The non-contact method of claim 24, further comprising at least one of displaying the determined cup size or transmitting the determined cup size to a preset device.
  • 26. A non-contact method for evaluating a performance of a garment comprising: receiving, by a processor, a plurality of three-dimensional (3D) images, the plurality of 3D images being successive 3D images, the plurality of 3D images including the breasts of the same individual, where the 3D images are acquired while the individual is moving;receiving, by the processor, a three-dimensional (3D) image acquired while the individual is stationary;defining a number of datapoints on the surface of the breasts in the 3D image acquired while the individual is stationary and a number of datapoints on the surface of an alignment region;selecting a subset of the 3D images acquired while the individual is moving;for each selected 3D image in the subset,pre-processing the selected 3D image to at least remove image data outside a predetermined region in the selected 3D image and rotate the selected 3D image to have each selected 3D image in the same orientation as the 3D image acquired while the individual is stationary;aligning the selected 3D image with respect to the alignment region of the 3D image acquired when the individual is stationary;defining a number of datapoints on the surface of the breasts in the selected 3D image; andcomparing the selected 3D image with the 3D image acquired when the individual is stationary by determining for each defined datapoint a displacement;determine, for each defined datapoint, a displacement parameter based on the determined displacement for each 3D image in the subset of 3D images with respect to the 3D image acquired when the individual is stationary for the same defined datapoint;generate a mapping based on the displacement parameter for each defined datapoint;identifying areas in the mapping with a displacement parameter greater than a threshold; andgenerating a report based on the identified areas.
  • 27. The non-contact method of claim 26, wherein the defining a number of datapoints on the surface of the breasts comprises: partitioning, by the processor, the breasts into horizontal slices;partitioning, by the processor, each horizontal slice into a plurality of portions on the surface of the breasts based on a fixed angular interval, wherein each portion corresponds to an angle value, and each portion includes a set of points;for each portion on each slice: determining, by the processor, an average distance among distances of the set of points with respect to one of the associated reference points; andsetting, by the processor, a point associated with the average distance as a datapoint represented by the angle value corresponding to the portion, where the datapoint is one of the number of datapoints identified.
  • 28. The non-contact method of claim 27, wherein the defining a number of datapoints on the surface of the breasts further comprises: determining, an absence of image points in particular portions of the horizontal slices, wherein the absent image points are removed from the 3D image during the pre-processing; andassigning a set of undefined values to the absent image points in the particular portion as datapoints.
  • 29. The non-contact method of any one of claims 26 to 28, wherein the pre-processing further comprises determining whether another body part is covering a surface of the breast or torso region and in response to determining that another body part is covering a surface of the breast or torso region, removing image data associated with the another body part and filling in a space corresponding to the removed image data with surface image points predicted for the space to maintain a curvature with a surrounding surface of the breast or maintain the curvature of the torso region.
  • 30. The non-contact method of any of claims 26 to 29, wherein the displacement is determined using d′j=√{square root over (xij2+yij2)}−√{square root over (xi02+yi02)}
  • 31. The non-contact method of claim 30, wherein the displacement parameter is a standard deviation.
  • 32. A non-contact method for determining a boundary of breasts comprising: receiving, by a processor, a plurality of three-dimensional (3D) images, the plurality of 3D images being successive 3D images, the plurality of 3D images including the breasts of the same individual, where the 3D images are acquired while the individual is moving;selecting a subset of the 3D images acquired while the individual is moving;selecting one of the 3D images as a base image;for the base image: pre-processing the base image to at least remove image data outside a predetermined region and rotate to a target orientation;defining a number of datapoints on the surface of the breasts and a number of datapoints on the surface of an alignment region;for the remaining selected 3D images in the subset or the selected 3D images in the subset: pre-processing the 3D image to at least remove image data outside a predetermined region and rotate the 3D image to have each 3D image selected in the same orientation as the base image;aligning the 3D image with respect to the alignment region of the base image;defining a number of datapoints on the surface of the breasts in the 3D image; andcomparing the 3D image with the base image by determining for each defined datapoint a vertical displacement;determining, for each defined datapoint, a displacement parameter based on the determined vertical displacement for each 3D image selected with respect to the base image for the same defined datapoint;generating a mapping based on the displacement parameter for each defined datapoint; anddetermining the boundary of the breasts using a threshold based on the mapping.
  • 33. A non-contact method of predicting a cup size of breasts of an individual comprising: receiving, by a processor, a plurality of three-dimensional (3D) images, the plurality of 3D images being successive 3D images, the plurality of 3D images including the breasts of the same individual, where the 3D images are acquired while the individual is moving;selecting a subset of the 3D images acquired while the individual is moving;selecting one of the 3D images as a base image; for the base image: pre-processing the base image to at least remove image data outside a predetermined region and rotate to a target orientation;defining a number of datapoints on the surface of the breasts and a number of datapoints on the surface of an alignment region;for the remaining selected 3D images in the subset or the selected 3D images in the subset: pre-processing the 3D image to at least remove image data outside a predetermined region and rotate the 3D image to have each 3D image selected in the same orientation as the base image;aligning the 3D image with respect to the alignment region of the base image;defining a number of datapoints on the surface of the breasts in the 3D image; andcomparing the 3D image with the base image by determining for each defined datapoint a vertical displacement;determining, for each defined datapoint, a displacement parameter based on the determined vertical displacement for each 3D image selected with respect to the base image for the same defined datapoint;generating a mapping based on the displacement parameter for each defined datapoint;determining a boundary of the breasts using a threshold value based on the mapping;separating the breasts from other parts of 3D image acquired while the individual is stationary based on the threshold value;defining a number of datapoints on the surface of the breasts in the 3D image acquired while the individual is stationary using horizontal slicing;calculating a shape discrepancy between the breasts in the 3D image acquired while the individual is stationary using the defined datapoints and datapoints in images of breasts associated with known cup sizes, respectively, each 3D image for the known cup sizes being acquired while a model is stationary; anddetermining the cup size based on the calculated shape discrepancy for each known cup size.
  • 34. A non-contact method for evaluating a performance of a garment comprising: receiving, by a processor, a plurality of three-dimensional (3D) images, the plurality of three-dimensional images being successive 3D images, the plurality 3D images including the breasts of the same individual, where the 3D images are acquired while the individual is moving; selecting a subset of the 3D images acquired while the individual is moving;selecting one of the 3D images as a base image;for the base image: pre-processing the base image to at least remove image data outside a predetermined region and rotate to a target orientation;defining a number of datapoints on the surface of the breasts and a number of datapoints on the surface of an alignment region;for the remaining selected 3D images in the subset or the selected 3D images: pre-processing the 3D image to at least remove image data outside a predetermined region and rotate the 3D image to have each 3D image selected in the same orientation as the base image;aligning the 3D image with respect to the alignment region of the base image;defining a number of datapoints on the surface of the breasts in the 3D image; andcomparing the 3D image with the base image by determining for each defined datapoint a displacement;determining, for each defined datapoint, a displacement parameter based on the determined displacement for each 3D image selected with respect to the base image for the same defined datapoint;generating a mapping based on the displacement parameter for each defined datapoint;identifying areas in the mapping with a displacement parameter greater than a threshold; andgenerating a report based on the identified areas.
  • 35. An apparatus or system comprising: a three-dimensional (3D) image scanner configured to obtain images of an individual and generate a plurality of 3D images of the individual;a memory configured to store image data for each 3D image;a processor configured to: select a subset of the 3D images, the subset of 3D images being 3D images acquired while the individual is moving;select a base image, the base image being a 3D image acquired while the individual is stationary or one of the selected 3D images in the subset;for the base image, the processor is configured to: pre-process the base image to at least remove image data outside a predetermined region and rotate to a target orientation; anddefine a number of datapoints on the surface of the breasts and a number of datapoints on the surface of an alignment region;for the selected 3D images in the subset or the remaining 3D images in the subset, the processor is configured to: pre-process the 3D image to at least remove image data outside a predetermined region and rotate the 3D image to have each 3D image selected in the same orientation as the base image;align the 3D image with respect to the alignment region of the base image;define a number of datapoints on the surface of the breasts in the 3D image; andcompare the 3D image with the base image by determining for each defined datapoint a vertical displacement;determine, for each defined datapoint, a displacement parameter based on the determined vertical displacement for each 3D image selected with respect to the base image for the same defined datapoint;generate a mapping based on the displacement parameter for each defined datapoint; anddetermine the boundary of the breasts using a threshold based on the mapping; anda display configured to display at least the mapping.
  • 36. The apparatus or system of claim 35, wherein the plurality of 3D images are successive 3D images acquired while the individual is moving.
  • 37. The apparatus or system of claim 35 or claim 36, wherein the 3D image scanner is further configured to obtain images while the individual is stationary and generate a three-dimensional image (3D) of the individual.
  • 38. The apparatus or system of any one of claims 35 to 37, wherein the 3D image scanner comprises a plurality of cameras positioned at different locations to cover a 360° view.
  • 39. The apparatus or system of any one of claims 35 to 38, further comprising a first communication interface and the 3D image scanner is configured to transmit the 3D images to the processor using the communication interface.
  • 40. The apparatus or system of claim 39, further comprising a second communication interface, wherein the processor is configured to transmit the mapping to the display via the second communication interface.
  • 41. The apparatus or system of claim 37, wherein the processor is further configured to predict a cup size of breasts of an individual.
  • 42. The apparatus or system of claim 41, wherein the processor is further configured to: separate the breasts from other parts of the 3D image acquired while the individual is stationary based on the threshold value;define a number of datapoints on the surface of the breasts in the 3D image acquired while the individual is stationary using horizontal slicing;calculate a shape discrepancy between the breasts in the 3D image acquired while the individual is stationary using the defined datapoints and datapoints in images of breasts associated with known cup sizes, respectively, each 3D image for the known cup sizes being acquired while a model is stationary; andpredict the cup size based on the calculated shape discrepancy for each known cup size.
  • 43. The apparatus or system of claim 42, wherein the processor is configured to display the predicted cup size on the display.
  • 44. The apparatus or system of claim 42, wherein the processor is configured to transmit the predicted cup size to a user terminal.
  • 45. The apparatus or system of any one of claims 35 to 42, further comprising a point of sales terminal and the display in the point of sales terminal.
  • 46. The apparatus or system of claim 35, wherein the datapoints on the surface of the breasts are defined using vertical slicing.
  • 47. An apparatus or system comprising: a three-dimensional (3D) image scanner configured to obtain images of an individual and generate a plurality of 3D images of the individual;a memory configured to store image data for each 3D image;a processor configured to: select a subset of the 3D images, the subset of 3D images being 3D images acquired while the individual is moving;select a base image, the base image being a 3D image acquired while the individual is stationary or one of the selected 3D images in the subset; for the base image, the processor is configured to:pre-process the base image to at least remove image data outside a predetermined region and rotate to a target orientation; anddefine a number of datapoints on the surface of the breasts and a number of datapoints on the surface of an alignment region;for the selected 3D images in the subset or the remaining 3D images in the subset, the processor is configured to: pre-process the 3D image to at least remove image data outside a predetermined region and rotate the 3D image to have each 3D image selected in the same orientation as the base image;align the 3D image with respect to the alignment region of the base image;define a number of datapoints on the surface of the breasts in the 3D image; andcompare the 3D image with the base image by determining for each defined datapoint a displacement;determine, for each defined datapoint, a displacement parameter based on the determined displacement for each 3D image selected with respect to the base image for the same defined datapoint;generate a mapping based on the displacement parameter for each defined datapoint;identify areas in the mapping with a displacement parameter greater than a threshold; andgenerate a report based on the identified areas.
  • 48. The apparatus or system of claim 47, wherein the processor is further configured to display the report or transmit the report.
  • 49. The apparatus or system of claim 47 or claim 48, wherein the plurality of 3D images are successive 3D images acquired while the individual is moving.
  • 50. The apparatus or system of any one of claims 47 to 49, wherein the 3D image scanner is further configured to obtain images while the individual is stationary and generate a three-dimensional image (3D) of the individual.
  • 51. The apparatus or system of any one of claims 47 to 50, wherein the 3D images are acquired while the individual is wearing a garment.
  • 52. The apparatus or system of claim 51, wherein the garment is a sports bra.
  • 53. The apparatus or system of claim 51 or claim 52, wherein the 3D images are acquired while the individual is nude and wherein the processor is configured to compare determined displacement when the individual is wearing the garment and when the individual is nude.
  • 54. The apparatus or system of claim 53, wherein the report comprises a percent difference in the displacement when the individual is wearing the garment and when the individual is nude.
  • 55. The apparatus or system of any of claims 47 to 54, wherein the displacement is a horizontal displacement.
  • 56. An apparatus comprising: a processor configured to: receive a plurality of three-dimensional (3D) images and store the 3D images in memory;select a subset of the 3D images, the subset of 3D images being 3D images acquired while the individual is moving;select a base image, the base image being a 3D image acquired while the individual is stationary or one of the selected 3D images in the subset;for the base image, the processor is configured to: pre-process the base image to at least remove image data outside a predetermined region and rotate to a target orientation; anddefine a number of datapoints on the surface of the breasts and a number of datapoints on the surface of an alignment region;for the selected 3D images in the subset or the remaining 3D images in the subset, the processor is configured to:pre-process the 3D image to at least remove image data outside a predetermined region and rotate the 3D image to have each 3D image selected in the same orientation as the base image;align the 3D image with respect to the alignment region of the base image;define a number of datapoints on the surface of the breasts in the 3D image; andcompare the 3D image with the base image by determining for each defined datapoint a vertical displacement;determine, for each defined datapoint, a displacement parameter based on the determined vertical displacement for each 3D image selected with respect to the base image for the same defined datapoint;generate a mapping based on the displacement parameter for each defined datapoint;determine the boundary of the breasts using a threshold based on the mapping; anda display configured to display at least the mapping.
  • 57. The apparatus of claim 56, wherein the processor is further configured to predict a cup size of breasts of an individual.
  • 58. An apparatus comprising: a processor configured to: receive a plurality of three-dimensional (3D) images and store the 3D images in memory;select a subset of the 3D images, the subset of 3D images being 3D images acquired while the individual is moving;select a base image, the base image being a 3D image acquired while the individual is stationary or one of the selected 3D images in the subset;for the base image, the processor is configured to:pre-process the base image to at least remove image data outside a predetermined region and rotate to a target orientation; anddefine a number of datapoints on the surface of the breasts and a number of datapoints on the surface of an alignment region;for the selected 3D images in the subset or the remaining 3D images in the subset, the processor is configured to: pre-process the 3D image to at least remove image data outside a predetermined region and rotate the 3D image to have each 3D image selected in the same orientation as the base image;align the 3D image with respect to the alignment region of the base image;define a number of datapoints on the surface of the breasts in the 3D image; andcompare the 3D image with the base image by determining for each defined datapoint a displacement;determine, for each defined datapoint, a displacement parameter based on the determined displacement for each 3D image selected with respect to the base image for the same defined datapoint;generate a mapping based on the displacement parameter for each defined datapoint;identify areas in the mapping with a displacement parameter greater than a threshold; and generate a report based on the identified areas.
CROSS-REFERENCE TO RELATED APPLICATION

This application claims the benefit of and priority to U.S. Provisional Application Ser. No. 63/094,985 filed on Oct. 22, 2020, the entirety of which is incorporated by reference.

PCT Information
Filing Document Filing Date Country Kind
PCT/US2021/055179 10/15/2021 WO
Provisional Applications (1)
Number Date Country
63094985 Oct 2020 US