The present invention relates to an image processing system, and in particular to an image processing system for converting two-dimensional (2D) images into a three-dimensional (3D) model. The present invention further relates to a method of the image processing system.
The metaverse is an online 3D virtual environment based on decentralization. A user can enter this artificial virtual world via a virtual reality (VR) headset, an augmented reality (AR) headset or an electronic device (e.g., a smart phone, a tablet computer, a personal computer, etc.). Many product providers need to manufacture a large number of 3D models of various products to achieve the desired AR effects in order to conform to the future development trend of the metaverse. However, it may take several engineers a few hours to manufacture the 3D model of a product, which significantly increases the time cost and the manufacturing cost. Accordingly, currently available technologies cannot satisfy the actual requirements.
One embodiment of the present invention provides an image processing system for converting two-dimensional (2D) images into a three-dimensional (3D) model, which includes a selecting module, an asset importing module, a converting module and a model generating module. The selecting module receives a category selecting instruction to select an object category and receives a template selecting instruction to select an object template corresponding to the object category. The asset importing module receives a plurality of 2D views of a target object in different view angles. The converting module projects the 2D views to the object template to generate a projected image. The model generating module amends the projected image to generate a 3D model.
In one embodiment, the model generating module performs one or more of compressing, trimming and dislocation adjustment to process the projected image in order to generate the 3D model.
In one embodiment, the image processing system further includes a preview module and an image capturing module. The image capturing module captures the image of a user. The preview module combines the image of the user with the 3D model to generate a preview image and a network link for accessing the 3D model.
In one embodiment, the image processing system further includes a packet module, and the packet module converts the 3D model into a packet.
In one embodiment, the image processing system further includes a cloud storage module. The cloud storage module receives the packet transmitted from the packet module and saves the packet.
Another embodiment of the present invention provides an image processing method for converting 2D images into a 3D model, which includes the following steps: receiving a category selecting instruction to select an object category and receiving a template selecting instruction to select an object template corresponding to the object category; receiving a plurality of 2D views of a target object in different view angles; projecting the 2D views to the object template to generate a projected image; and amending the projected image to generate a 3D model.
In one embodiment, the step of amending the projected image to generate the 3D model further includes the following step: performing one or more of compressing, trimming and dislocation adjustment to process the projected image in order to generate the 3D model.
In one embodiment, the image processing method further includes the following steps: capturing the image of a user; combining the image of the user with the 3D model to generate a preview image; and generating a network link for accessing the 3D model.
In one embodiment, the image processing method further includes the following steps: converting the 3D model into a packet; and generating a network link for accessing the 3D model.
In one embodiment, the image processing method further includes the following step: transmitting the packet to a cloud storage module and saving the packet in the cloud storage module.
The image processing system for converting 2D images into a 3D model and the method thereof in accordance with the embodiments of the present invention may have the following advantages:
(1) The image processing system can swiftly generate the 3D models of a large number of different target objects, which can greatly reduce the time cost and the manufacturing cost.
(2) The image processing system can perform one or more of compressing, trimming and dislocation adjustment to further optimize the projected image, so as to generate the 3D model having the desired visual effects.
(3) The image processing system can combine the image of the user with the 3D model to generate the preview image, so the user can determine whether the visual effects of the 3D model conform to his/her requirements.
(4) The image processing system can convert the 3D model into the packet and save the packet in the cloud storage module, so the user can conveniently access the 3D model at any time.
These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.
The present invention will become more fully understood from the detailed description given herein below and the accompanying drawings which are given by way of illustration only, and thus are not limitative of the present invention and wherein:
In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the disclosed embodiments. It will be apparent, however, that one or more embodiments may be practiced without these specific details. In other instances, well-known structures and devices are schematically shown in order to simplify the drawing. It should be understood that, when it is described that an element is “coupled” or “connected” to another element, the element may be “directly coupled” or “directly connected” to the other element or “coupled” or “connected” to the other element through a third element. In contrast, it should be understood that, when it is described that an element is “directly coupled” or “directly connected” to another element, there are no intervening elements.
Please refer to the accompanying drawings. The image processing system 1 for converting 2D images into a 3D model in accordance with one embodiment of the present invention includes a selecting module 11, an asset importing module 12, a converting module 13, a managing module 14, a model generating module 15, a preview module 16, an image capturing module 17, a packet module 18 and a cloud storage module 19.
The selecting module 11 can receive a category selecting instruction Cs to select an object category and receive a template selecting instruction Ts to select an object template TM corresponding to the above object category. The user can transmit the category selecting instruction Cs and the template selecting instruction Ts via his/her electronic device (e.g., a smart phone, a tablet computer, a VR headset, an AR headset, etc.) so as to select the object category (e.g., glasses, earrings, nose ring, makeup, hairstyle, cosmetic lens, various accessories, etc.) and the object template TM corresponding thereto. For instance, if the object category is glasses, the object template TM corresponding thereto may be the sunglasses template, the reading glasses template, the safety goggles template, etc. In this embodiment, the object category selected by the user is glasses and the object template TM corresponding thereto is the sunglasses template, as shown in the drawings.
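By way of non-limiting illustration only, the selection logic of the selecting module 11 may be sketched in Python as follows; the category names, template names and function names are hypothetical examples introduced for explanation and do not limit the present invention.

    # Minimal sketch of a selecting module: an object category is chosen first, and
    # then an object template that belongs to that category. All names are hypothetical.
    TEMPLATES = {
        "glasses": ["sunglasses", "reading_glasses", "safety_goggles"],
        "earrings": ["stud", "hoop"],
        "hairstyle": ["short", "long"],
    }

    def select_category(category_instruction):
        # Category selecting instruction Cs: the name of the desired object category.
        if category_instruction not in TEMPLATES:
            raise ValueError("unknown object category: " + category_instruction)
        return category_instruction

    def select_template(category, template_instruction):
        # Template selecting instruction Ts: the template must belong to the selected category.
        if template_instruction not in TEMPLATES[category]:
            raise ValueError("template does not correspond to category " + category)
        return template_instruction

    # Example: the user selects the "glasses" category and the "sunglasses" template.
    category = select_category("glasses")
    template = select_template(category, "sunglasses")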
As shown in the drawings, the asset importing module 12 is connected to the selecting module 11 and can receive a plurality of 2D views of a target object in different view angles, such as a front view FV, a left view LV and a right view RV of the target object. The user can import the 2D views into the asset importing module 12 via his/her electronic device.
As shown in the drawings, the converting module 13 is connected to the asset importing module 12 and can project the 2D views to the object template TM so as to generate a projected image PM.
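Purely as an illustrative sketch, and assuming that the projection is realized by pasting each 2D view into a predefined region of the object template TM (a texture-atlas-like layout), the converting step may be expressed in Python with the Pillow imaging library as follows; the file names and region coordinates are assumptions.

    # Sketch: generate a projected image by pasting each 2D view into a predefined
    # region of the object template image. Requires Pillow (pip install Pillow).
    # The file names and region anchor coordinates below are hypothetical.
    from PIL import Image

    REGION_ANCHORS = {          # upper-left corner of each view's region in the template
        "left": (0, 0),
        "front": (256, 0),
        "right": (512, 0),
    }

    def project_views(template_path, view_paths, out_path="projected.png"):
        template = Image.open(template_path).convert("RGBA")
        for name, path in view_paths.items():
            view = Image.open(path).convert("RGBA")
            # Paste the view at its anchor, using its own alpha channel as the mask.
            template.paste(view, REGION_ANCHORS[name], view)
        template.save(out_path)
        return template

    # Example call (the image files are assumed to exist):
    # project_views("sunglasses_template.png",
    #               {"front": "front.png", "left": "left.png", "right": "right.png"})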
As shown in the drawings, the managing module 14 is connected to the converting module 13 and can receive the projected image PM. The system administrator can confirm, via the managing module 14, whether the characteristics of the projected image PM conform to a predetermined format.
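The predetermined format is not limited to any specific set of properties; purely as a hypothetical example, a format check could verify the colour mode, the resolution and the file type of the projected image PM, as sketched below in Python with the Pillow library.

    # Sketch of a format check on the projected image: an assumed "predetermined
    # format" requiring a PNG file with an alpha channel and a minimum resolution.
    from PIL import Image

    MIN_WIDTH, MIN_HEIGHT = 512, 512   # hypothetical thresholds

    def conforms_to_format(image_path):
        img = Image.open(image_path)
        has_alpha = img.mode in ("RGBA", "LA")
        large_enough = img.width >= MIN_WIDTH and img.height >= MIN_HEIGHT
        is_png = img.format == "PNG"
        return has_alpha and large_enough and is_png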
The model generating module 15 is connected to the managing module 14. If the characteristics of the projected image PM conform to the predetermined format, the system administrator can transmit the projected image PM to the model generating module 15 via the managing module 14. Afterward, the model generating module 15 can amend the projected image PM (e.g., compressing, trimming, dislocation adjustment, etc.) so as to further optimize the projected image PM and convert the projected image PM into a 3D model.
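As a minimal, non-limiting sketch, the amending operations may be approximated with the Pillow library as follows, where trimming crops fully transparent borders, compressing limits the stored image size, and dislocation adjustment shifts the image by an offset; the thresholds and offsets shown are assumptions.

    # Sketch of the amending operations using Pillow. Trimming crops away fully
    # transparent borders, compressing limits the image resolution, and dislocation
    # adjustment shifts the image content by an offset so that it lines up with the
    # object template. The default values and offsets are hypothetical.
    from PIL import Image

    def trim(img):
        # Expects an RGBA image; crop to the bounding box of the non-transparent pixels.
        bbox = img.getchannel("A").getbbox()
        return img.crop(bbox) if bbox else img

    def compress(img, max_side=1024):
        # Downscale so that the longer side does not exceed max_side pixels.
        scale = min(1.0, max_side / max(img.size))
        new_size = (max(1, int(img.width * scale)), max(1, int(img.height * scale)))
        return img.resize(new_size)

    def adjust_dislocation(img, dx, dy):
        # Shift the image content right by dx and down by dy on a transparent canvas.
        canvas = Image.new("RGBA", img.size, (0, 0, 0, 0))
        canvas.paste(img, (dx, dy), img)
        return canvas

    # Example (offsets chosen arbitrarily):
    # amended = adjust_dislocation(compress(trim(Image.open("projected.png").convert("RGBA"))), 4, 2)
    # amended.save("amended.png")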
The preview module 16 is connected to the model generating module 15 and the image capturing module 17 is connected to the preview module 16. The image capturing module 17 can capture the image of the user; for example, the user can upload his/her image to the image capturing module 17 via his/her electronic device. In another embodiment, the image capturing module 17 (e.g., an analog camera, a digital camera, etc.) can directly capture the image of the user. Then, the preview module 16 can combine the image of the user with the 3D model in order to generate a preview image. Next, the user can determine whether the visual effects of the 3D model conform to his/her requirements. If the visual effects of the 3D model cannot satisfy the user's requirements, the user can directly discard the projected image PM. Afterward, the user can re-import the 2D views into the asset importing module 12 and then repeat the above steps.
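Purely for illustration, the combination of the image of the user with the 3D model may be approximated by alpha-compositing a pre-rendered image of the 3D model onto the user image, as sketched below; the file names and the overlay position are assumptions, and an actual implementation would render the 3D model itself.

    # Sketch: overlay a pre-rendered RGBA image of the 3D model onto the captured
    # user image to produce a preview image. File names and position are hypothetical.
    from PIL import Image

    def make_preview(user_image_path, model_render_path, position=(0, 0), out_path="preview.png"):
        user_img = Image.open(user_image_path).convert("RGBA")
        overlay = Image.open(model_render_path).convert("RGBA")
        layer = Image.new("RGBA", user_img.size, (0, 0, 0, 0))
        layer.paste(overlay, position, overlay)            # place the model render at the given position
        preview = Image.alpha_composite(user_img, layer)   # blend the two layers using the alpha channel
        preview.save(out_path)
        return preview

    # Example call (the image files are assumed to exist):
    # make_preview("user.png", "sunglasses_render.png", position=(120, 80))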
The packet module 18 is connected to the preview module 16 and the cloud storage module 19 is connected to the packet module 18. If the visual effects of the 3D model conform to the user's requirements, the user can save the 3D model in the packet module 18. Afterward, the packet module 18 can convert the 3D model into a packet and transmit the packet to the cloud storage module 19, such that the packet is saved in the cloud storage module 19. Meanwhile, the preview module 16 can generate a network link for accessing the 3D model. The user can transmit the network link to the electronic device (e.g., a smart phone, a tablet computer, a personal computer, etc.) of a potential customer. In this way, the customer can trigger the network link by clicking it via his/her electronic device (the customer can also trigger the network link via other similar means). After the network link has been triggered, the electronic device of the customer can obtain the 3D model from the cloud storage module 19 and capture the image of the customer via the camera thereof. Next, the electronic device of the customer can combine the image of the customer with the 3D model, such that the customer can view a preview image of himself/herself wearing the 3D model (e.g., the 3D model of a pair of sunglasses). The above steps can be completed directly in the browser of the customer's electronic device without any additional software. Via the above mechanism, the user can manufacture the 3D models of a large number of different target objects in a short time and save them in the cloud storage module 19. Accordingly, the user can access the 3D models via the cloud storage module 19 at any time, which is more convenient in use.
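As a non-limiting sketch, the packet may be an archive of the model files, and the cloud storage module 19 may be reached through an HTTP upload; the pre-signed URL, file list and link format below are assumptions about one possible backend and do not limit the present invention.

    # Sketch: pack the 3D model files into an archive ("packet") and upload the
    # archive to cloud storage through an HTTP PUT to a pre-signed URL. The URL,
    # the file list and the returned link are hypothetical.
    import zipfile

    import requests  # third-party HTTP library: pip install requests

    def build_packet(model_files, packet_path="model_packet.zip"):
        with zipfile.ZipFile(packet_path, "w", zipfile.ZIP_DEFLATED) as zf:
            for path in model_files:
                zf.write(path)
        return packet_path

    def upload_packet(packet_path, presigned_url):
        with open(packet_path, "rb") as fh:
            response = requests.put(presigned_url, data=fh)
        response.raise_for_status()
        # Assume the object remains reachable at the URL without its query string.
        return presigned_url.split("?")[0]

    # Example (assumed inputs):
    # packet = build_packet(["model.gltf", "texture.png"])
    # link = upload_packet(packet, "https://storage.example.com/models/1234?signature=abc")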
Via the above mechanism, the image processing system 1 can manufacture the 3D models of a large number of different target objects in a short time. Therefore, the image processing system 1 not only can obtain the 3D models having great visual effects, but also can greatly reduce the time cost and the manufacturing cost. Accordingly, the image processing system 1 can definitely meet actual requirements.
Moreover, the image processing system 1 can perform one or more of compressing, trimming and dislocation adjustment (adjusting the positions of the front view FV, the left view LV and the right view RV so that their combination matches the object template TM) in order to further optimize the projected image PM. Thus, the image processing system 1 can manufacture the 3D models having excellent visual effects. Further, the image processing system 1 can also provide the preview function to combine the image of the user with a 3D model to generate a preview image, so the user can determine, according to the preview image, whether the visual effects of the 3D model combined with the image of the user conform to his/her requirements, in order to take the necessary measures in time.
The embodiment just exemplifies the present invention and is not intended to limit the scope of the present invention; any equivalent modification and variation according to the spirit of the present invention is to be also included within the scope of the following claims and their equivalents.
It is worth pointing out that it may take several engineers a few hours to manufacture the 3D model of a product, which significantly increases the time cost and the manufacturing cost. Accordingly, currently available technologies cannot satisfy the actual requirements. On the contrary, according to one embodiment of the present invention, the image processing system includes the asset importing module, the converting module and the model generating module. The asset importing module can receive the 2D views of the target object in different view angles and the converting module can project the 2D views to the object template to generate the projected image. The model generating module can amend the projected image to generate the 3D model. Via the above image processing mechanism, the image processing system can swiftly generate the 3D models of a large number of different target objects, which can greatly reduce the time cost and the manufacturing cost. Thus, the image processing system can definitely satisfy actual requirements.
Also, according to one embodiment of the present invention, the image processing system has the model generating module, which can perform compressing, trimming and dislocation adjustment so as to further optimize the projected image. Therefore, the image processing system can generate the 3D model having the desired visual effects.
Further, according to one embodiment of the present invention, the image processing system has the preview module and the image capturing module. The image capturing module can capture the image of the user and the preview module can combine the image of the user with the 3D model to generate the preview image. In this way, the user can determine whether the visual effects of the combination of the 3D model and the image of the user fit in with his/her requirements according to the preview image.
Moreover, according to one embodiment of the present invention, the image processing system includes the packet module and the cloud storage module. The packet module can convert the optimized 3D model into the packet and save the packet in the cloud storage module. Therefore, the user can conveniently access the optimized 3D model via the cloud storage module at any time, so the system can be more convenient in use and more flexible in application.
Furthermore, according to one embodiment of the present invention, the image processing system can generate the optimized 3D models for a large number of target objects in a short time via the specially-designed image processing mechanism. In this way, the system can effectively solve the problems of the currently available technologies. Therefore, the system can achieve high commercial value. As set forth above, the image processing system according to the embodiments of the present invention can definitely achieve great technical effects.
Please refer to the accompanying drawings. The image processing method for converting 2D images into a 3D model in accordance with one embodiment of the present invention includes the following steps:
Step S41: receiving a category selecting instruction to select an object category and receiving a template selecting instruction to select an object template corresponding to the object category.
Step S42: receiving a plurality of 2D views of a target object in different view angles.
Step S43: projecting the 2D views to the object template to generate a projected image.
Step S44: performing one or more of compressing, trimming and dislocation adjustment to process the projected image in order to generate the 3D model.
Step S45: combining the image of the user with the 3D model to generate a preview image and a network link for accessing the 3D model.
Step S46: converting the 3D model into a packet, transmitting the packet to a cloud storage module and saving the packet in the cloud storage module.
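Purely as an illustrative sketch, the sequence of Steps S41 to S46 may be orchestrated as follows; each parameter is a placeholder callable standing in for the corresponding module, and the names are hypothetical.

    # Sketch: one possible orchestration of Steps S41 to S46. Each parameter is a
    # placeholder callable standing in for the corresponding module.
    def run_pipeline(select, import_views, project, amend, preview, store):
        category, template = select()             # S41: select object category and template
        views = import_views()                    # S42: receive 2D views in different view angles
        projected = project(views, template)      # S43: project the views to generate a projected image
        model_3d = amend(projected)               # S44: compress / trim / adjust to generate the 3D model
        preview_image, link = preview(model_3d)   # S45: preview image and network link
        store(model_3d)                           # S46: packet transmitted to and saved in cloud storage
        return model_3d, preview_image, link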
The embodiment just exemplifies the present invention and is not intended to limit the scope of the present invention; any equivalent modification and variation according to the spirit of the present invention is to be also included within the scope of the following claims and their equivalents.
Please refer to the accompanying drawings. The process flow of the image processing system for converting 2D images into a 3D model in accordance with another embodiment of the present invention includes the following steps:
Step S51: selecting the object category of a target object; then, the process proceeds to Step S52.
Step S52: selecting the object template corresponding to the object category of the target object; then, the process proceeds to Step S53.
Step S53: importing the 2D views of the target object in different view angles and projecting the 2D views to the object template to generate a projected image; then, the process proceeds to Step S54.
Step S54: confirming whether the characteristics of the projected image conform to the predetermined format; if they do, the process proceeds to Step S55; if they do not, the process returns to Step S53.
Step S55: amending the projected image and converting the projected image into a 3D model; then, the process proceeds to Step S56.
Step S56: combining the image of a user with the 3D model to generate a preview image and determining whether the visual effects of the 3D model conform to the requirements of the user; if they do, the process proceeds to Step S57; if they do not, the process returns to Step S53.
Step S57: generating a network link for accessing the 3D model and converting the 3D model into a packet; then, the process proceeds to Step S58.
Step S58: transmitting the packet to a cloud storage module and saving the packet in the cloud storage module.
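As a non-limiting sketch, the decision flow of Steps S51 to S58, including the return paths to Step S53, may be expressed as follows; all callables are hypothetical placeholders standing in for the modules described above.

    # Sketch of the decision flow of Steps S51 to S58. The format check (S54) and
    # the user's approval of the preview (S56) both return the flow to Step S53.
    # All callables are hypothetical placeholders.
    def run_flow(select_category, select_template, import_and_project, format_ok,
                 amend, make_preview, user_approves, package_and_store):
        category = select_category()                  # S51
        template = select_template(category)          # S52
        while True:
            projected = import_and_project(template)  # S53
            if not format_ok(projected):              # S54: format mismatch, re-import the views
                continue
            model_3d = amend(projected)               # S55
            preview = make_preview(model_3d)          # S56: generate the preview image
            if user_approves(preview):                # S56: unsatisfactory visual effects loop back
                break
        link = package_and_store(model_3d)            # S57 and S58
        return model_3d, link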
The embodiment just exemplifies the present invention and is not intended to limit the scope of the present invention; any equivalent modification and variation according to the spirit of the present invention is to be also included within the scope of the following claims and their equivalents.
Although the operations of the method(s) herein are shown and described in a particular order, the order of the operations of each method may be altered so that certain operations may be performed in an inverse order or so that certain operations may be performed, at least in part, concurrently with other operations. In another embodiment, instructions or sub-operations of distinct operations may be implemented in an intermittent and/or alternating manner.
To sum up, according to one embodiment of the present invention, the image processing system includes the asset importing module, the converting module and the model generating module. The asset importing module can receive the 2D views of the target object in different view angles and the converting module can project the 2D views to the object template to generate the projected image. The model generating module can amend the projected image to generate the 3D model. Via the above image processing mechanism, the image processing system can swiftly generate the 3D models of a large number of different target objects, which can greatly reduce the time cost and the manufacturing cost. Thus, the image processing system can definitely satisfy actual requirements.
Also, according to one embodiment of the present invention, the image processing system has the model generating module, which can perform compressing, trimming and dislocation adjustment so as to further optimize the projected image. Therefore, the image processing system can generate the 3D model having the desired visual effects.
Further, according to one embodiment of the present invention, the image processing system has the preview module and the image capturing module. The image capturing module can capture the image of the user and the preview module can combine the image of the user with the 3D model to generate the preview image. In this way, the user can determine whether the visual effects of the combination of the 3D model and the image of the user fit in with his/her requirements according to the preview image.
Moreover, according to one embodiment of the present invention, the image processing system includes the packet module and the cloud storage module. The packet module can convert the optimized 3D model into the packet and save the packet in the cloud storage module. Therefore, the user can conveniently access the optimized 3D model via the cloud storage module at any time, so the system can be more convenient in use and more flexible in application.
It should also be noted that at least some of the operations for the methods described herein may be implemented using software instructions stored on a computer useable storage medium for execution by a computer (or a processor). As an example, an embodiment of a computer program product includes a computer useable storage medium to store a computer readable program.
The computer useable or computer readable storage medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device). Examples of non-transitory computer useable and computer readable storage media include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk, and an optical disk. Current examples of optical disks include a compact disk with read only memory (CD-ROM), a compact disk with read/write (CD-R/W), and a digital video disk (DVD).
Alternatively, embodiments of the invention (or each module of the system) may be implemented entirely in hardware, entirely in software or in an implementation containing both hardware and software elements. In embodiments which use software, the software may include, but is not limited to, firmware, resident software, microcode, etc. In embodiments which use hardware, the hardware may be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), central processing units (CPUs), controllers, micro-controllers, microprocessors, electronic devices, other electronic units designed to perform the functions described herein, or a combination thereof.
Furthermore, according to one embodiment of the present invention, the image processing system can generate the optimized 3D models for a large number of target objects in a short time via the specially-designed image processing mechanism. In this way, the system can effectively solve the problems of the currently available technologies. Therefore, the system can achieve high commercial value.
Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.
Foreign application priority data: Taiwan (TW) Patent Application No. 111113603, filed in April 2022.