This application claims the benefit of Taiwan application Serial No. 100144714, filed Dec. 5, 2011, the subject matter of which is incorporated herein by reference.
1. Technical Field
The invention relates in general to a method and a system for establishing a 3D object.
2. Background
The Augmented Reality (AR) technique calculates spatial information, including positions and orientations, of images captured by cameras in real time, and adds corresponding digital contents to the images according to the spatial information. The technique aims to make a virtual object overlay a real object on the display for entertainment interactions or information display. However, conventional augmented reality applications are usually limited to augmenting a virtual object on a planar graphic card. In general, the augmented virtual object cannot be displayed normally if the patterns used for system identification are shaded, since the system then cannot trace the planar graphic card. This destroys the immersion of the augmented reality application, and the approach is hard to extend to augmented reality applications of actual 3D objects.
To implement augmented reality applications of a 3D object, the system generally needs to obtain spatial information of the object so as to augment the required virtual interaction contents on the object. Existing approaches build a model of the actual object and feed the model information into the system, so that the system is able to trace the spatial posture of the actual object at any time to achieve augmented reality applications of 3D objects. However, the conventional method for establishing a 3D object model needs expensive equipment or complicated and accurate procedures. It does not match general users' requirements and is hard to spread to general application fields, such as consumer electronic products.
The disclosure is directed to a method and a system for establishing a 3D object, which establish mutual spatial relationships based on the postures of multiple featured patches with different textured features, and accordingly trace and describe an object.
According to a first aspect of the present disclosure, a method for establishing a 3D object is provided. The method includes the following steps. Multiple featured patches, with different textured features, on the surface of an object are captured and stored. An image capture unit is utilized to detect the featured patches on the surface of the object. A processing unit is utilized to build a spatial relationship matrix corresponding to the featured patches according to the detected spatial information of the featured patches. The processing unit is utilized to trace and describe the object according to the spatial relationship matrix.
According to a second aspect of the present disclosure, a system for establishing a 3D object is provided. The system includes an image capture unit and a processing unit. The image capture unit captures and stores multiple featured patches, with different textured features, on a surface of an object. After the image capture unit detects the featured patches, the processing unit builds a spatial relationship matrix corresponding to the featured patches according to the detected spatial information of the featured patches, and traces and describes the object according to the spatial relationship matrix.
The invention will become apparent from the following detailed description of the preferred but non-limiting embodiments. The following description is made with reference to the accompanying drawings.
The disclosure proposes a method and a system for establishing a 3D object, which establish mutual spatial relationships based on the postures of multiple featured patches with different textured features, and accordingly trace and describe an object.
Referring to the accompanying drawings, a system 100 for establishing a 3D object according to an embodiment of the disclosure includes an image capture unit 110, a processing unit 120, and a display unit 130. Now referring concurrently to the drawings, a method for establishing a 3D object according to an embodiment is described below. First, multiple featured patches with different textured features on the surface of an object 140 are captured and stored.
In another embodiment, an image analysis capture zone Rc is shown on the display unit 130. When the object 140 is rotated manually or rotated automatically on a support platform, the image analysis capture zone Rc will become fully located in a plane or near-plane to-be-analyzed zone R1 with a textured feature. When the image analysis capture zone Rc is fully located in the to-be-analyzed zone R1, the processing unit 120 detects the number of feature points in the image analysis capture zone Rc.
When the number of the feature points exceeds a threshold, the processing unit 120 determines that the to-be-analyzed zone corresponding to the image analysis capture zone Rc, which has sufficient feature points, is a featured patch. In a preferred embodiment, the range of the image analysis capture zone Rc is slightly less than the range of the to-be-analyzed zone R1.
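By way of a non-limiting illustration (this sketch is not part of the original disclosure; OpenCV's ORB detector and the threshold value of 50 are assumptions), the feature point count inside the capture zone Rc could be implemented as follows:

```python
# Hypothetical sketch: count feature points inside the image analysis capture
# zone Rc and accept the zone as a featured patch when a threshold is exceeded.
import cv2

def is_featured_patch(gray_frame, rc_rect, threshold=50):
    """rc_rect = (x, y, w, h) of the image analysis capture zone Rc."""
    x, y, w, h = rc_rect
    roi = gray_frame[y:y + h, x:x + w]      # restrict analysis to Rc
    orb = cv2.ORB_create()                  # any corner/texture detector would do
    keypoints = orb.detect(roi, None)       # feature points found in the zone
    return len(keypoints) >= threshold      # enough texture -> featured patch
```

The threshold is a tuning parameter; a zone with too few feature points would not be stable enough to identify and lock.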
In step S210, the object 140 is hand-held or placed on a support platform, so that the image capture unit 110 detects the featured patches on the surface of the object 140 for display on the display unit 130. When any two neighboring featured patches (Pi, Pj) are shown on the display unit 130, the processing unit 120 can lock the two featured patches (Pi, Pj) through identification of their textured features, i and j being integers ranging from 1 to 6.
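As a further non-limiting sketch (the descriptor matcher and the minimum match count are assumptions, not the disclosed method), locking a stored featured patch in the current frame could rely on matching its textured features:

```python
# Hypothetical sketch: "lock" a stored featured patch Pi in the current frame
# by matching its textured features (ORB descriptors + Hamming distance).
import cv2

def lock_patch(stored_patch_img, frame_gray, min_matches=20):
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(stored_patch_img, None)
    kp2, des2 = orb.detectAndCompute(frame_gray, None)
    if des1 is None or des2 is None:
        return None                                   # no texture to match
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    # the patch is considered locked when enough consistent matches are found
    if len(matches) < min_matches:
        return None
    return sorted(matches, key=lambda m: m.distance)[:min_matches]
```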
In step S220, the processing unit 120 estimates the spatial information (Qi, Qj) of the two featured patches (Pi, Pj). The spatial information includes, for example, the postures, positions, or scales of the two featured patches (Pi, Pj) in space. The spatial relationship between the two featured patches is shown as equation (1).
$Q_i \cdot {}^{i}S_j = Q_j$ (1)
${}^{i}S_j$ in equation (1) is the spatial relationship that transforms $Q_i$ into $Q_j$. $Q$ represents an augmented transform matrix of the spatial information of a featured patch and consists of a rotation matrix $R$ and a translation vector $t$ representing 3D positions. $Q$ is shown as equation (2).
$Q_i = [R \mid t]$ (2)
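As a hedged illustration of equations (1) and (2) (augmenting $Q$ to a full 4x4 homogeneous transform is an assumption of this sketch), the relative relationship follows directly as ${}^{i}S_j = Q_i^{-1} Q_j$:

```python
# Hypothetical sketch: compute the relative spatial relationship iSj between
# two patch poses, following Qi . iSj = Qj  =>  iSj = inv(Qi) @ Qj.
import numpy as np

def augment(R, t):
    """Build a 4x4 homogeneous transform Q = [R|t] from R (3x3) and t (3,)."""
    Q = np.eye(4)
    Q[:3, :3] = R
    Q[:3, 3] = t
    return Q

def relative_relationship(Q_i, Q_j):
    """iSj such that Q_i @ iSj == Q_j, per equation (1)."""
    return np.linalg.inv(Q_i) @ Q_j
```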
In step S230, the processing unit 120 calculates the spatial relationships of consecutive neighboring featured patches according to the spatial information of the featured patches to obtain a neighboring patch relationship matrix Ω1. The spatial relationship includes the relative rotations and translations between the two featured patches. The neighboring patch relationship matrix Ω1 can only express a single-stranded spatial relationship; that is, any one of the featured patches builds spatial relationships only with its neighboring featured patches. If any one of the featured patches cannot be detected by the system 100 for establishing a 3D object, there is no guarantee that the spatial information of the undetected featured patch can be estimated by the system 100 from its neighboring featured patches. The neighboring patch relationship matrix Ω1 is shown as equation (3).
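Purely as one possible illustration (holding Ω1 as a sparse mapping rather than a dense matrix is an implementation assumption, not the disclosed form of equation (3)), the neighboring relationships could be collected as follows:

```python
# Hypothetical sketch: the neighboring patch relationship matrix Omega1 as a
# sparse mapping from each neighboring pair (i, j) to its relative transform iSj.
import numpy as np

def build_omega1(poses, neighbor_pairs):
    """poses: {patch_id: 4x4 Q}; neighbor_pairs: iterable of (i, j) tuples."""
    omega1 = {}
    for i, j in neighbor_pairs:
        omega1[(i, j)] = np.linalg.inv(poses[i]) @ poses[j]   # iSj
    return omega1
```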
In step S240, the processing unit 120 calculates the spatial relationships of any two non-neighboring featured patches based on the neighboring patch relationship matrix Ω1 to obtain the spatial relationship matrix Ω2, shown as equation (4). ${}^{i}S_j$ and ${}^{j}S_i$ are inverse matrices of each other. Thus the spatial relationship matrix Ω2 only needs to be an upper triangular matrix or a lower triangular matrix to represent the mutual spatial relationships between all the featured patches.
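Equation (4) itself is not reproduced in this text; consistent with the upper triangular form described above and with ${}^{i}S_i = I$ as noted below, Ω2 for n featured patches could be pictured, illustratively, as:

```latex
% Illustrative form only (the original equation (4) is not reproduced here):
% an upper triangular arrangement of all pairwise relationships for n patches.
\Omega_2 =
\begin{bmatrix}
I & {}^{1}S_{2} & {}^{1}S_{3} & \cdots & {}^{1}S_{n} \\
  & I           & {}^{2}S_{3} & \cdots & {}^{2}S_{n} \\
  &             & I           & \cdots & {}^{3}S_{n} \\
  &             &             & \ddots & \vdots      \\
  &             &             &        & I
\end{bmatrix}
```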
When obtaining Ω2 from Ω1, the spatial relationships between neighboring patches can be propagated to those between non-neighboring patches by the equation ${}^{i}S_k = {}^{i}S_j \cdot {}^{j}S_k$. ${}^{i}S_j$ and ${}^{j}S_k$ respectively represent the spatial relationships of two sets of neighboring patches; that is, the featured patch Pi is neighboring to the featured patch Pj, and the featured patch Pj is neighboring to the featured patch Pk. The spatial relationship ${}^{i}S_k$ between the non-neighboring featured patches Pi and Pk can thus be obtained via the featured patch Pj. In addition, ${}^{i}S_i$ represents the spatial relationship between the featured patch Pi and itself; since there is no rotation or translation, ${}^{i}S_i$ simplifies to the identity matrix I. Ω2 can be obtained from Ω1 by following the above steps. Consequently, it is ensured that the spatial information of any one of the featured patches can be estimated from at least one visible featured patch at any time.
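A non-limiting sketch of this propagation, using the Ω1 mapping from the earlier sketch (breadth-first composition over the neighbor graph is one possible strategy, not necessarily the disclosed one):

```python
# Hypothetical sketch: propagate neighboring relationships (Omega1) to all
# pairs (Omega2) by composing iSk = iSj @ jSk along paths in the neighbor graph.
from collections import deque
import numpy as np

def build_omega2(omega1, patch_ids):
    # undirected adjacency; the reverse direction uses jSi = inv(iSj)
    adj = {p: {} for p in patch_ids}
    for (i, j), S in omega1.items():
        adj[i][j] = S
        adj[j][i] = np.linalg.inv(S)

    omega2 = {}
    for start in patch_ids:
        omega2[(start, start)] = np.eye(4)      # iSi = I (no rotation/translation)
        frontier = deque([start])
        while frontier:
            i = frontier.popleft()
            for j, S in adj[i].items():
                if (start, j) not in omega2:
                    # compose along the path: startSj = startSi @ iSj
                    omega2[(start, j)] = omega2[(start, i)] @ S
                    frontier.append(j)
    return omega2
```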
Steps S230 and S240 described above mainly utilize the processing unit 120 to build the spatial relationship matrix Ω2 corresponding to the featured patches according to the detected spatial information of the featured patches.
When the spatial relationship matrix Ω2 is built, in step S250, the processing unit 120 is able to trace and describe each featured patch of the object 140 according to the spatial relationship matrix Ω2. In step S250, the processing unit 120 substantially obtains the mutual spatial relationships between the featured patches on the surface of the object 140 according to the spatial relationship matrix Ω2. According to these spatial relationships, the processing unit 120 can estimate, from the spatial information of any one featured patch that is shown on the display unit 130 and identified, the spatial information of the other featured patches that are shaded, not shown on the display unit 130, or impossible to identify and lock stably.
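Illustratively (assuming the 4x4 pose convention of the earlier sketches), recovering a shaded patch's pose from any visible patch reduces to a single composition per equation (1):

```python
# Hypothetical sketch: recover the pose of a shaded (occluded) featured patch j
# from a currently visible featured patch i, using Qj = Qi @ iSj from Omega2.
def estimate_hidden_pose(Q_visible_i, i, j, omega2):
    """Q_visible_i: 4x4 pose of identified patch i; omega2 from build_omega2."""
    return Q_visible_i @ omega2[(i, j)]
```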
Through the identification and locking described above, the processing unit 120 can substantially obtain the spatial position and direction of any one of the featured patches at any time, and thus the processing unit 120 can make virtual augmented information overlay at least one of the featured patches of the object 140. For example, the processing unit 120 can make the virtual augmented information overlay the surfaces of the featured patches, or make the virtual augmented information move or rotate between the patches. Afterwards, the display unit 130 is utilized to display the object 140 and the corresponding continuous augmented digital content that overlays the object 140, without destroying the immersion of the augmented reality application.
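Finally, as a non-limiting sketch of the overlay step (the camera intrinsics K, the distortion handling, and the point-drawing style are all assumptions), virtual content could be projected onto a traced patch as follows:

```python
# Hypothetical sketch: overlay virtual content by projecting its 3D points with
# a traced patch pose Q (4x4) and assumed camera intrinsics K.
import cv2
import numpy as np

def draw_virtual_points(frame, Q_patch, model_points, K, dist_coeffs=None):
    """model_points: (N, 3) float32 array of virtual content vertices."""
    R, t = Q_patch[:3, :3], Q_patch[:3, 3]
    rvec, _ = cv2.Rodrigues(R)                  # rotation matrix -> rotation vector
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)               # assume no lens distortion
    img_pts, _ = cv2.projectPoints(model_points, rvec, t, K, dist_coeffs)
    for p in img_pts.reshape(-1, 2):
        cv2.circle(frame, (int(p[0]), int(p[1])), 3, (0, 255, 0), -1)
    return frame
```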
The method and the system for establishing a 3D object proposed in the embodiments of the disclosure detect the featured patches with different textures on the surface of an object, establish mutual spatial relationships based on the postures of these featured patches with different textured features, and trace and describe the object according to the spatial relationships. This builds a basis for the subsequent addition of augmented information in 3D augmented reality applications and visual interactions, and it is suitable for general users.
It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed embodiments. It is intended that the specification and examples be considered as exemplary only, with a true scope of the disclosure being indicated by the following claims and their equivalents.
| Number | Date | Country | Kind |
|---|---|---|---|
| 100144714 | Dec 2011 | TW | national |