This application claims priority to Taiwan Application Serial Number 112118822, filed on May 19, 2023, which is herein incorporated by reference in its entirety.
The present disclosure relates to a fabric image processing device and method.
In present textile technologies, the technique of simulating a virtual fabric (or cloth) in a three-dimensional space is often adopted, so as to observe drapes and motions of the virtual fabric in the three-dimensional space. This usually requires measuring physical deformation parameters of the fabric (e.g., bending works in the warp/weft directions, stretch rates in the warp/weft/oblique directions) before further simulations can be performed. However, measuring the physical deformation parameters of the fabric may take a long time and a large amount of manpower, and specific testing machines may also be required. Therefore, obtaining the physical deformation parameters of the fabric without directly measuring them is a problem that persons having ordinary skill in the art seek to resolve.
The present disclosure provides a fabric image processing device, comprising an image capturing circuit, a memory and a processor. The image capturing circuit is configured to capture a first top view image and a first side view image of a first fabric. The memory is configured to store a first physical deformation parameter corresponding to the first fabric. The processor is coupled to the image capturing circuit and the memory, and is configured to execute a first neural network model, a second neural network model, a first linear regression model, a second linear regression model, and a third linear regression model.
The processor is configured to execute the following operations: performing a matting algorithm on the first top view image and the first side view image, respectively, so as to generate a first top view silhouette image and a first side view silhouette image; updating the first neural network model and the first linear regression model according to the first top view silhouette image and the corresponding first physical deformation parameter, and updating the second neural network model and the second linear regression model according to the first side view silhouette image and the corresponding first physical deformation parameter; inputting the first top view silhouette image and the first side view silhouette image to the first neural network model and the second neural network model, respectively, so as to generate a first output vector and a second output vector; and concatenating the first output vector and the second output vector to generate a concatenated vector, and updating the third linear regression model according to the concatenated vector and the corresponding first physical deformation parameter.
The present disclosure provides a fabric image processing method, comprising: capturing a first top view image and a first side view image of a first fabric, and performing a matting algorithm on the first top view image and the first side view image, respectively, so as to generate a first top view silhouette image and a first side view silhouette image; updating a first neural network model and a first linear regression model according to the first top view silhouette image and a corresponding first physical deformation parameter, and updating a second neural network model and a second linear regression model according to the first side view silhouette image and the corresponding first physical deformation parameter; inputting the first top view silhouette image and the first side view silhouette image to the first neural network model and the second neural network model, respectively, so as to generate a first output vector and a second output vector; and concatenating the first output vector and the second output vector to generate a concatenated vector, and updating a third linear regression model according to the concatenated vector and the corresponding first physical deformation parameter.
It is to be understood that both the foregoing general description and the following detailed description are by examples, and are intended to provide further explanation of the disclosure as claimed.
Reference will now be made in detail to the present embodiments of the disclosure, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the description to refer to the same or like parts.
Refer to
In general, a complicated process may be required to measure physical deformation parameters PDP of the fabric, where the physical deformation parameters PDP may include a bending work in the warp direction BWR, a bending work in the weft direction BWF, a stretch rate in the warp direction SWR, a stretch rate in the weft direction SWF and a stretch rate in the oblique direction SO.
More specifically, the bending work in the warp direction BWR is the work required for bending the fabric against/along the warp direction, and the unit of the bending work in the warp direction BWR is a product of a gram-force, a bending length and a bending angle (gf×mm×rad). The bending work in the weft direction BWF is the work required for bending the fabric against/along the weft direction, and the unit of the bending work in the weft direction BWF is also the product of the gram-force, the bending length and the bending angle.
In addition, the stretch rate in the warp direction SWR is defined according to a strip method of a breaking strength in the CNS 12915 L3233-2010 standard, and is measured by stretching the fiber at a constant rate of extension (CRE) under a fixed load of 500 gw along the warp direction of the fiber; the unit of the stretch rate in the warp direction SWR is percentage (%). The stretch rate in the weft direction SWF is defined according to the same strip method as a stretch rate measured by stretching the fiber at the constant rate of extension (CRE) under the fixed load of 500 gw along the weft direction of the fiber, and the unit of the stretch rate in the weft direction SWF is percentage (%). The stretch rate in the oblique direction SO is defined according to the same strip method as a stretch rate measured by stretching the fiber at the constant rate of extension (CRE) under the fixed load of 500 gw along the oblique direction (the direction at an angle of 45 degrees to the weft direction) of the fiber, and the unit of the stretch rate in the oblique direction SO is percentage (%). In other words, each of the bending work in the warp direction BWR, the bending work in the weft direction BWF, the stretch rate in the warp direction SWR, the stretch rate in the weft direction SWF and the stretch rate in the oblique direction SO is numerical data.
The aforementioned stretch rate in the warp direction SWR, stretch rate in the weft direction SWF and stretch rate in the oblique direction SO may all be computed according to Formula 1.
In Formula 1, the clamping distance is the distance between two clamping positions on the fabric, and the stretching displacement is the stretching length of the fabric under a fixed load of 500 g.
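Formula 1 itself is not reproduced in this text; based on the definitions above, it presumably takes the following form, with the result expressed as a percentage:

$$\text{stretch rate}\;(\%) = \frac{\text{stretching displacement}}{\text{clamping distance}} \times 100$$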
However, it may be known from the above that physical deformation parameters PDP such as the bending work in the warp direction BWR, the bending work in the weft direction BWF, the stretch rate in the warp direction SWR, the stretch rate in the weft direction SWF and the stretch rate in the oblique direction SO may only be measured by applying a constant force on the fabric and then measuring various quantities of deformation of the fabric. As a result, the whole measuring process becomes complicated and lengthy.
Currently, the measuring process of the physical deformation parameters PDP on an ordinary fabric simulation device is complicated and lengthy, and the efficiency of the simulation is therefore poor. To overcome the aforementioned problems, the present disclosure provides a fabric detection technology, where a top view image and a side view image corresponding to a fabric are transformed into a top view silhouette image and a side view silhouette image, and then neural network (NN) models and linear regression models are trained on the top view silhouette image, the side view silhouette image, and the pre-measured physical deformation parameter PDP corresponding to the fabric.
As a result, when a new fabric is to be simulated, a new top view image and a new side view image corresponding to the new fabric may first be acquired and transformed into a new top view silhouette image and a new side view silhouette image, and the trained neural network models and the trained linear regression model may then transform the new top view silhouette image and the new side view silhouette image into a new physical deformation parameter.
Therefore, through the aforementioned detection method, various drapes and motions of a new virtual fabric in the three-dimensional space 3DS may be simulated according to the new physical deformation parameter. For a new fabric, the new physical deformation parameter may be recognized rapidly through the new top view silhouette image and the new side view silhouette image, so that the new fabric may be simulated without the complicated and lengthy measurement of the new physical deformation parameter. The aforementioned technology of the present disclosure is further described by the following embodiments.
Refer also to
In some embodiments, the fabric image processing device 100 may be implemented with a computer, a server, or a processing center, etc. In some embodiments, the image capturing circuit 110 may be a photographic circuit (e.g., digital single-lens reflex camera (DSLR), digital video camera (DVC), or near-infrared camera (NIRC)) configured to shoot pictures of a fabric (e.g., a piece of cloth). In some embodiments, the memory 120 may be implemented with a memory unit, a flash memory, a read-only memory (ROM), a hard-disk drive (HDD) or any other equivalent storage devices. In some embodiments, the processor 130 may be implemented with a processing unit, a central processing unit (CPU), or a computing unit.
In the present embodiment, the memory 120 stores a physical deformation parameter corresponding to a fabric beforehand. In some embodiments, a detector machine (e.g., a machine configured to measure deformations (bending and extending) of the fabric) may be utilized to perform extending and bending tests on the fabric, so as to generate the physical deformation parameter, and the detector machine may send the physical deformation parameter corresponding to the fabric directly to the memory 120 for storage. It is worth mentioning that while only one physical deformation parameter corresponding to one fabric is taken as an example herein, in practical implementations, the memory 120 may further store a plurality of physical deformation parameters corresponding to a plurality of fabrics.
In the present embodiment, as shown in
In some embodiments, the first linear regression model LRM1, the second linear regression model LRM2, and the third linear regression model LRM3 may all be regression output layers. In some embodiments, the memory 120 may be configured to store parameters corresponding to the first neural network model NNM1, the second neural network model NNM2, the first linear regression model LRM1, the second linear regression model LRM2, and the third linear regression model LRM3, respectively, where those parameters may be an average according to the training history, a set of predetermined values, or random values.
Further embodiments are given to describe how to train neural network models and linear regression models according to transformations of a top view image and a side view image corresponding to the fabric, and the pre-measured physical deformation parameter corresponding to the fabric.
Refer also to
In some embodiments, the image capturing circuit 110 may shoot the first fabric from a top view and a side view to generate the first top view image and the first side view image, respectively. In some embodiments, the matting algorithm may be a clustering algorithm (e.g., a k-means algorithm) or another algorithm configured to recognize edges of objects. In some embodiments, the image capturing circuit 110 may shoot images of the first fabric placed in a green screen scene or another scene from the top view and the side view, so as to generate the first top view image and the first side view image, respectively. In some embodiments, the processor 130 performs image background removal and cutting (removing a part of the background) on the first top view image and the first side view image (e.g., with a size of 4232×3204 pixels) through the matting algorithm, so as to generate the first top view silhouette image and the first side view silhouette image (e.g., with a size of 224×224 pixels). The first top view silhouette image and the first side view silhouette image may both be black and white images, where the white region of a black and white image may be an object region of the first fabric, and the black region may be a background region.
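As an illustration of this step, the following is a minimal sketch (not the disclosed implementation) of a k-means-based matting pass, assuming a green screen background; the function name `silhouette_from_image` and the two-cluster color heuristic are assumptions for this example only:

```python
import numpy as np
from sklearn.cluster import KMeans

def silhouette_from_image(rgb: np.ndarray, background_rgb=(0, 255, 0)) -> np.ndarray:
    """rgb: H x W x 3 uint8 image; returns an H x W mask (255 = fabric, 0 = background)."""
    h, w, _ = rgb.shape
    pixels = rgb.reshape(-1, 3).astype(np.float32)
    # Cluster all pixels into two color groups (fabric vs. background).
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
    # The cluster whose centroid is closer to the green-screen color is background.
    centroids = np.stack([pixels[labels == k].mean(axis=0) for k in (0, 1)])
    bg = np.linalg.norm(centroids - np.asarray(background_rgb, np.float32), axis=1).argmin()
    return (labels != bg).astype(np.uint8).reshape(h, w) * 255
```

In practice, the resulting mask would then be cropped and resized (e.g., to 224×224 pixels) before being used as a silhouette image.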
In the following paragraphs, practical examples are given to describe the first top view image, the first side view image, the first top view silhouette image, and the first side view silhouette image. Refer also to
In the first top view image PIC1, a disk PLK may be placed above the first fabric FB (along the Z-direction), and a needle ND1 may be set along the horizontal direction (along the XY plane) above the first fabric FB and the disk PLK. In the first side view image PIC2, the first fabric FB may be placed on the bracket BS (the first fabric FB may drape on the bracket BS, and the disk PLK is configured to hold the non-draping part of the first fabric FB), while a needle ND2 may be set along the vertical direction (along the Z-direction) below the first fabric FB. In addition, the needle ND1 may be parallel to the weft direction of the fiber of the first fabric FB, while the needle ND2 may be parallel to the warp direction of the fiber of the first fabric FB. In other words, the needle ND1 may indicate the weft direction of the first fabric FB, while the needle ND2 may indicate the warp direction of the first fabric FB.
Refer also to
Moreover, as shown in
In some embodiments, the processor 130 may take the first top view silhouette image and the first physical deformation parameter as a training sample and a training label, respectively, and then use the training sample and the training label to update parameters of the first neural network model NNM1 and the first linear regression model LRM1. In some embodiments, the processor 130 may update parameters of the first neural network model NNM1 and the first linear regression model LRM1 through the backpropagation algorithm according to the training sample and the training label.
In the following paragraphs, practical examples are given to describe the updating process of the first neural network model NNM1 and the first linear regression model LRM1. Refer also to
Finally, a loss (e.g., a mean-squared error (MSE)) may be calculated according to the first physical deformation parameter y and the output result ŷ1, and the loss value may be used in the backpropagation algorithm, so as to update the parameters of the first neural network model NNM1 and the first linear regression model LRM1. It is worth mentioning that, to simplify the description, the first top view silhouette image BW1 and the first physical deformation parameter y of a single first fabric are taken as an example herein. However, in practical implementations, a large number of top view silhouette images and physical deformation parameters of a large number of fabrics are used to train the first neural network model NNM1 and the first linear regression model LRM1 using the same method.
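The following is a minimal sketch of this training step, assuming a PyTorch-style setup; the backbone architecture shown is purely illustrative, since the disclosure does not specify one, and the same pattern applies to the second neural network model NNM2 and the second linear regression model LRM2 with side view silhouettes:

```python
import torch
import torch.nn as nn

# Illustrative backbone for NNM1: maps a 1x224x224 silhouette to a 32-dim output vector.
nnm1 = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())
lrm1 = nn.Linear(32, 1)  # LRM1 as a regression output layer

optimizer = torch.optim.Adam(list(nnm1.parameters()) + list(lrm1.parameters()), lr=1e-3)

bw1 = torch.rand(8, 1, 224, 224)  # a batch of top view silhouette images (training samples)
y = torch.rand(8, 1)              # the corresponding measured parameter, e.g. BWR (training labels)

optimizer.zero_grad()
y_hat1 = lrm1(nnm1(bw1))          # output result ŷ1
loss = nn.MSELoss()(y_hat1, y)    # mean-squared error between ŷ1 and y
loss.backward()                   # backpropagation
optimizer.step()                  # update NNM1 and LRM1 parameters
```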
In some embodiments, the processor 130 may take the first side view silhouette image and the corresponding first physical deformation parameter as a training sample and a training label, respectively, and then use the training sample and the training label to update parameters of the second neural network model NNM2 and the second linear regression model LRM2. In some embodiments, the processor 130 may update parameters of the second neural network model NNM2 and the second linear regression model LRM2 through the backpropagation algorithm according to the training sample and the training label.
In the following paragraphs, practical examples are given to describe the updating process of the second neural network model NNM2 and the second linear regression model LRM2. Refer also to
Finally, a loss (e.g., a mean-squared error (MSE)) may be calculated according to the first physical deformation parameter y and the output result ŷ2, and the loss value may be used in the backpropagation algorithm, so as to update the parameters of the second neural network model NNM2 and the second linear regression model LRM2. It is worth mentioning that, to simplify the description, the first side view silhouette image BW2 and the first physical deformation parameter y of a single first fabric are taken as an example herein. However, in practical implementations, a large number of side view silhouette images and physical deformation parameters of a large number of fabrics are used to train the second neural network model NNM2 and the second linear regression model LRM2 using the same method.
Moreover, as shown in
Moreover, in Step S240, the first output vector and the second output vector are concatenated to generate a concatenated vector, and the third linear regression model is updated according to the concatenated vector and the corresponding first physical deformation parameter. In some embodiments, the third linear regression model LRM3 may be connected after the first neural network model NNM1 and the second neural network model NNM2.
In some embodiments, the processor 130 may concatenate the first output vector and the second output vector sequentially to generate the concatenated vector. In some embodiments, the processor 130 may take the concatenated vector and the corresponding first physical deformation parameter as a training sample and a training label, respectively, and then use the training sample and the training label to update parameters of the third linear regression model LRM3. In some embodiments, the processor 130 may update parameters of the third linear regression model LRM3 through the backpropagation algorithm according to the training sample and the training label.
In the following paragraphs, practical examples are given to describe the updating process of the third linear regression model LRM3. Refer also to
Finally, the concatenated vector CV may be inputted to the third linear regression model LRM3 to generate an output result ŷ3, a loss may be calculated according to the first physical deformation parameter y and the output result ŷ3, and the loss value may be used in the backpropagation algorithm, so as to update the parameters of the third linear regression model LRM3. It is worth mentioning that, to simplify the description, the first top view silhouette image BW1, the first side view silhouette image BW2, and the first physical deformation parameter y of a single first fabric are taken as an example herein. However, in practical implementations, a large number of top view silhouette images, side view silhouette images, and physical deformation parameters of a large number of fabrics are used to train the third linear regression model LRM3.
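Continuing the illustrative PyTorch setup from the training sketch above (the backbone is an assumption, not the disclosed architecture), the concatenation step and the LRM3 update may be sketched as follows; only the parameters of LRM3 are stepped here, matching the description above:

```python
import torch
import torch.nn as nn

def make_backbone():  # same illustrative 32-dim backbone as in the earlier sketch
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten())

nnm1, nnm2 = make_backbone(), make_backbone()   # assumed already updated in earlier steps
lrm3 = nn.Linear(64, 1)                         # LRM3 over the 64-dim concatenated vector
optimizer = torch.optim.Adam(lrm3.parameters(), lr=1e-3)

bw1 = torch.rand(8, 1, 224, 224)                # top view silhouette images BW1
bw2 = torch.rand(8, 1, 224, 224)                # side view silhouette images BW2
y = torch.rand(8, 1)                            # first physical deformation parameter y

optimizer.zero_grad()
with torch.no_grad():                           # NNM1/NNM2 are not updated in this step
    cv = torch.cat([nnm1(bw1), nnm2(bw2)], dim=1)  # concatenated vector CV
loss = nn.MSELoss()(lrm3(cv), y)                # loss between ŷ3 and y
loss.backward()
optimizer.step()                                # update LRM3 parameters only
```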
In some embodiments, the processor 130 may generate another concatenated vector according to fabric specification information corresponding to the first fabric, and then concatenate this other concatenated vector with the concatenated vector to generate a new concatenated vector. Then, the processor 130 may use the new concatenated vector and the first physical deformation parameter to further update the parameters of the third linear regression model LRM3. In some embodiments, the fabric specification information comprises fabric category data, fabric combination data, and fabric weight data.
In some embodiments, the fabric category data comprises a weaving category, an elastic category, and a fabric type. For example, the weaving category may be one of various weaving specifications such as warp knit, weft knit, plain weave, or twill weave. The elastic category may be one of various elastic specifications such as non-elastic, 4-way elastic, warp elastic, or weft elastic. The fabric type may be a fabric specification such as dobby, single jersey fleece/plush, tricot fleece/tricot brush, or lamination. In other words, each of the weaving category, the elastic category and the fabric type is a string.
In some embodiments, the fabric combination data may include a composition combination and a textile-finishing combination. For example, the composition combination may be a fiber composition specification such as 60% nylon fiber and 40% polyester fiber, 92% polyester fiber and 8% spandex fiber, 50% cationic (cd) polyester fiber and 50% polyester fiber, or 77% nylon fiber and 23% polyurethane (pu). The textile-finishing combination may be unfinished, brush with embossing, or brush. In other words, each of the composition combination and the textile-finishing combination comprises a plurality of strings.
In some embodiments, the fabric weight data may include a fabric weight and a specific gravity, where the unit of the fabric weight is grams per square meter (GSM), and the specific gravity is a ratio between two densities. In other words, each of the fabric weight and the specific gravity is numerical data.
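For illustration, the fabric specification information described above may be grouped as in the following sketch; the field names are assumptions for this example, not the disclosed schema:

```python
from dataclasses import dataclass

@dataclass
class FabricSpecification:
    # Fabric category data (strings)
    weaving_category: str    # e.g., "plain weave"
    elastic_category: str    # e.g., "4-way elastic"
    fabric_type: str         # e.g., "single jersey fleece/plush"
    # Fabric combination data (strings)
    composition: str         # e.g., "60% nylon fiber and 40% polyester fiber"
    textile_finishing: str   # e.g., "brush with embossing"
    # Fabric weight data (numerical)
    fabric_weight_gsm: float  # grams per square meter (GSM)
    specific_gravity: float   # ratio between two densities
```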
In some embodiments, the processor 130 may transform the fabric category data into a first vector through a target-encoding algorithm and normalization, transform the fabric combination data into a second vector through a bidirectional encoder representations from transformers (BERT) algorithm and a principal component analysis (PCA) algorithm, and transform the fabric weight data into a third vector through normalization. Then, the processor 130 may concatenate the first vector, the second vector, and the third vector to generate the other concatenated vector.
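The following is a minimal sketch of this encoding step under simplifying assumptions: target encoding is shown in its simplest per-category-mean form, and `bert_embed` is a hypothetical stand-in for a real BERT encoder (which would typically produce, e.g., 768-dim sentence vectors):

```python
import numpy as np
from sklearn.decomposition import PCA

def target_encode(categories, targets):
    """Replace each category string with the mean target value observed for it, then normalize."""
    means = {}
    for c, t in zip(categories, targets):
        means.setdefault(c, []).append(t)
    enc = np.array([np.mean(means[c]) for c in categories])
    return (enc - enc.mean()) / (enc.std() + 1e-8)

def bert_embed(texts):
    """Hypothetical stand-in for a BERT encoder returning one 768-dim vector per text."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 768))

weaving = ["plain weave", "twill weave", "plain weave"]       # fabric category data
comps = ["60% nylon fiber and 40% polyester fiber",
         "92% polyester fiber and 8% spandex fiber",
         "77% nylon fiber and 23% polyurethane (pu)"]         # fabric combination data
gsm = np.array([[180.0], [210.0], [195.0]])                   # fabric weight data (GSM)
y = np.array([0.8, 1.2, 0.9])                                 # a physical deformation parameter

v1 = target_encode(weaving, y).reshape(-1, 1)                 # first vector
v2 = PCA(n_components=2).fit_transform(bert_embed(comps))     # second vector
v3 = (gsm - gsm.mean()) / gsm.std()                           # third vector
spec_vector = np.concatenate([v1, v2, v3], axis=1)            # the other concatenated vector
```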
It is worth mentioning that although only one physical deformation parameter is taken as an example, in practical implementations, different first neural network models NNM1, second neural network models NNM2, first linear regression models LRM1, second linear regression models LRM2, and third linear regression models LRM3 may be trained for different physical deformation parameters.
The training stage of the first neural network model NNM1, the second neural network model NNM2, the first linear regression model LRM1, the second linear regression model LRM2, and the third linear regression model LRM3 may be completed following the aforementioned steps. By this, the well-trained first neural network model NNM1, second neural network model NNM2, and third linear regression model LRM3 may be utilized to conduct the measurement of the physical deformation parameter.
In the following paragraphs, the executing stage of the first neural network model NNM1, the second neural network model NNM2, and the third linear regression model LRM3 will be described.
In some embodiments, the image capturing circuit 110 may capture a second top view image and a second side view image corresponding to a new fabric (hereinafter referred to as the second fabric), and the processor 130 may perform the matting algorithm on the second top view image and the second side view image respectively, so as to generate a second top view silhouette image and a second side view silhouette image.
In some embodiments, the processor 130 may transform the second top view silhouette image and the second side view silhouette image corresponding to the second fabric into a second physical deformation parameter (i.e., the measured physical deformation parameter) through the first neural network model NNM1, the second neural network model NNM2, and the third linear regression model LRM3.
In the following paragraphs, practical examples will be given to describe the executing stage of the first neural network model NNM1, the second neural network model NNM2, and the third linear regression model LRM3.
Refer also to
As shown therein, the processor 130 may perform the matting algorithm on the second top view image PIC3 and the second side view image PIC4, respectively, so as to generate the second top view silhouette image BW3 and the second side view silhouette image BW4. Then, the second top view silhouette image BW3 and the second side view silhouette image BW4 may be inputted to the well-trained first neural network model NNM1 and second neural network model NNM2, respectively. By this, a second physical deformation parameter PDP′ may be outputted from the well-trained third linear regression model LRM3, where the second physical deformation parameter PDP′ may be a bending work in the warp direction BWR′, a bending work in the weft direction BWF′, a stretch rate in the warp direction SWR′, a stretch rate in the weft direction SWF′, or a stretch rate in the oblique direction SO′, etc.
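A minimal sketch of this executing stage, reusing the illustrative PyTorch modules assumed in the training sketches (in practice, the well-trained weights would be loaded rather than freshly initialized):

```python
import torch
import torch.nn as nn

def make_backbone():  # same illustrative backbone as in the training sketches
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten())

nnm1, nnm2, lrm3 = make_backbone(), make_backbone(), nn.Linear(64, 1)
# Here the well-trained parameters would be restored, e.g. via load_state_dict().

nnm1.eval(); nnm2.eval(); lrm3.eval()
with torch.no_grad():
    bw3 = torch.rand(1, 1, 224, 224)   # second top view silhouette image BW3
    bw4 = torch.rand(1, 1, 224, 224)   # second side view silhouette image BW4
    cv = torch.cat([nnm1(bw3), nnm2(bw4)], dim=1)
    pdp_prime = lrm3(cv)               # second physical deformation parameter PDP'
```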
By this, as the well-trained first neural network model NNM1, second neural network model NNM2, and third linear regression model LRM3 are used, the second physical deformation parameter PDP′ corresponding to the second fabric may be easily obtained from the second top view silhouette image BW3 and the second side view silhouette image BW4, the complicated measurement procedure may be omitted, and the cost of measurement may be reduced. As a result, since the physical deformation parameter may be easily obtained, the procedure of using a simulation display SD in
In summary, the fabric image processing device according to the present disclosure utilizes a combination of two neural network models and three linear regression models to construct, through a training procedure using silhouette images of fabrics, a model for predicting physical deformation parameters. By this, when new silhouette images corresponding to a new fabric are inputted to the two well-trained neural network models and one of the linear regression models, a new physical deformation parameter may be generated to perform simulations. As a result, various drapes and motions of a virtual fabric may be simulated in a three-dimensional virtual space without measuring the various deformations of the new fabric (which are more difficult to obtain), which may reduce the time and manpower consumed by additional measurement of physical deformation parameters, and thereby improve the efficiency of fabric simulation in the three-dimensional virtual space.
Although the present disclosure has been described in considerable detail with reference to certain embodiments thereof, other embodiments are possible. Therefore, the spirit and scope of the appended claims should not be limited to the description of the embodiments contained herein. It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the present disclosure without departing from the scope or spirit of the present disclosure. In view of the foregoing, it is intended that the present disclosure cover modifications and variations of this disclosure provided they fall within the scope of the following claims and their equivalents.
Number | Date | Country | Kind
---|---|---|---
112118822 | May 19, 2023 | TW | national