This application claims the benefit of Taiwan application Serial No. 112133672, filed Sep. 5, 2023, the subject matter of which is incorporated herein by reference.
The invention relates in general to a processing method, an electronic device and an electronic system, and more particularly to an imaging method, an electronic device and a video conference system.
With advances in video technology, people can communicate and hold discussions remotely using video conference software. During a video conference, the speaker can activate the camera lens to send his/her image to each participant.
During the video conference, the speaker can show the participants a particular object, the indoor environment, or beautiful scenery through the lens. For the participants to view the complete appearance of the object, the speaker can rotate the object or shoot around it. However, even if the speaker does his/her best to shoot every detail of the object, a participant may still want to examine the object carefully from a particular angle, yet cannot keep asking the speaker to show the object again from the desired angle. Therefore, it has become a prominent task for the industry to provide a technology that allows a participant to actively adjust the viewing angle of the object without asking the speaker to show the object again.
The invention is directed to an imaging method, an electronic device and a video conference system. During a video conference, the video conference stream can be displayed on a display. Furthermore, a 3D model can be built at the local end based on the 2D frames displayed on the display, and an adjusted image corresponding to a particular viewing angle information can be rendered according to the 3D model. Regardless of the rotation angle of the speaker with respect to an object, each participant can independently rotate the object within the imaging window W2 as if the participant were on site holding the object at hand.
According to one embodiment of the present invention, an imaging method is provided. The imaging method is used for performing a video conference on a display. The imaging method includes the following steps. At least one video conference stream is received. When a 3D modeling command is received, a plurality of 2D frames shown on the display are captured. Based on the 2D frames, at least one 3D model is built. When a rendering command corresponding to a viewing angle information is received, an adjusted image corresponding to the viewing angle information is rendered according to the at least one 3D model.
According to another embodiment of the present invention, an electronic device is provided. The electronic device includes a transmission module, a display and a processing module. The transmission module is used for receiving at least one video conference stream. The display is used for displaying the video conference stream. The processing module includes a frame capturing unit, a modeling unit and an imaging unit. When a 3D modeling command is received, the frame capturing unit captures a plurality of 2D frames shown on the display. The modeling unit is used for building at least one 3D model based on the 2D frames. When a rendering command corresponding to a viewing angle information is received, the imaging unit displays an adjusted image corresponding to the viewing angle information according to the 3D model.
According to an alternate embodiment of the present invention, a video conference system is provided. The video conference system includes a first electronic device and a second electronic device. The first electronic device is used for capturing at least one video conference stream and uploading it to a network. The second electronic device includes a transmission module, a display and a processing module. The transmission module is used for receiving a video conference stream through the network.
The display is used for displaying the video conference stream. The processing module includes a frame capturing unit, a modeling unit and an imaging unit. When a 3D modeling command is received, the frame capturing unit captures a plurality of 2D frames shown on the display. The modeling unit is used for continuously building at least one 3D model based on the 2D frames. When a rendering command corresponding to a viewing angle information is received, the imaging unit displays an adjusted image corresponding to the viewing angle information according to the 3D model.
The above and other aspects of the invention will become better understood with regard to the following detailed description of the preferred but non-limiting embodiment(s). The following description is made with reference to the accompanying drawings.
Referring to
The video conference stream VS is instantly transferred from the speaker's electronic device 100 to the electronic device 200n. The video conference window W1 shown on the display 220 of the participant's electronic device 200n will continuously display the video conference stream VS.
In the present embodiment, the participant's electronic device 200n can display an adjusted image IMtk of the object 800 on the imaging window W2. The imaging window W2 can be superimposed on the video conference window W1. Regardless of the rotation angle of the speaker with respect to an object 800, each participant can independently rotate the object 800 within the imaging window W2 using a mouse or a keyboard as if the participant were on site holding the object 800 at hand.
What is displayed on the video conference window W1 is the synchronized video conference stream VS, and what is displayed on the imaging window W2 is the part or position of the object 800 which the participant is interested in. The participant can choose the part or position of the object 800 by adjusting the rotation angle of the object 800.
Referring to
Referring to
Then, the method proceeds to step S120, in which whether a 3D modeling command CM1 is received is determined by the frame capturing unit 231 of the processing module 230. The 3D modeling command CM1 can be triggered by an input device 300. The input device 300 can be realized by, for example, a mouse, a keyboard or a microphone. For instance, a participant can use a mouse to move the cursor to a model building button on the frame, then click the model building button to trigger the 3D modeling command CM1. Or, the participant can press a quick key combination on the keyboard to trigger the 3D modeling command CM1. Or, the participant can input a particular segment of voice through the microphone to trigger the 3D modeling command CM1. If it is determined that the 3D modeling command CM1 is received, the method proceeds to step S130.
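As a rough illustration of how such triggers might be mapped to the 3D modeling command CM1, the following sketch dispatches hypothetical input events to a command. It is not part of the specification; the event names and the quick-key combination are assumptions made purely for illustration.

```python
# Illustrative sketch (assumed event names, not from the specification):
# mapping input events from a mouse, keyboard or microphone to the
# 3D modeling command CM1.
def to_command(event):
    """Translate an input event into a command string, or None if unmapped."""
    triggers = {
        ("mouse", "model_button_click"): "CM1",
        ("keyboard", "ctrl+shift+m"): "CM1",   # assumed quick-key combination
        ("microphone", "build model"): "CM1",  # assumed voice phrase
    }
    return triggers.get(event)
```

Any of the three input paths described above would then converge on the same CM1 command before step S130 begins.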
In the present embodiment, even when the method has proceeded to step S130, step S110 continues to be performed.
In step S130, a plurality of 2D frames FMi of the display 220 are captured by the frame capturing unit 231. The 2D frames FMi can be the entire frame of the display 220, the frame of a video conference window W1, or the frame of a video sub-window within the video conference window W1. The frame capturing unit 231 captures the 2D frames FMi at a specific capture frequency. The capture frequency does not have to be identical to the display frequency of the video conference stream.
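The decoupling of the capture frequency from the display frequency can be sketched as follows. This is an illustrative stand-in, not the specification's implementation: `grab_frame` represents whatever screen- or window-capture call the frame capturing unit 231 actually uses.

```python
# Illustrative sketch: capture 2D frames at a fixed capture frequency that
# need not match the display frequency of the video conference stream.
class FrameCapturer:
    def __init__(self, capture_hz):
        self.period = 1.0 / capture_hz
        self.last_capture = None

    def maybe_capture(self, now, grab_frame):
        """Capture a frame only if a full capture period has elapsed."""
        if self.last_capture is None or now - self.last_capture >= self.period:
            self.last_capture = now
            return grab_frame()
        return None
```

For example, a 60 Hz display driven through this gate with a 10 Hz capture frequency yields ten captured frames per second, independent of the stream's own frame rate.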
Then, the method proceeds to step S140. In the present embodiment, even when the method has proceeded to step S140, step S110 and step S130 continue to be performed.
In step S140, at least one 3D model MDt is built by the modeling unit 232 of the electronic device 200n based on the 2D frames FMi. Step S140 includes step S141 and step S142. In step S141, a plurality of feature points FPij are captured from the 2D frames FMi by the modeling unit 232 of the electronic device 200n.
Then, in step S142, the feature points FPij are superimposed by the modeling unit 232 of the electronic device 200n to link the 2D frames FMi. After the 2D frames FMi are linked, the 3D model MDt can be built using triangular mesh technology.
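One conceivable way to link 2D frames is nearest-neighbour matching of their feature-point descriptors. The toy sketch below is an assumption for illustration only: real systems would typically use descriptors such as ORB or SIFT, reject ambiguous matches, and then triangulate the matched points into the triangular mesh mentioned above.

```python
# Illustrative sketch: link two 2D frames by matching feature-point
# descriptors (here toy integer tuples) via nearest-neighbour search
# on squared Euclidean distance.
def match_features(desc_a, desc_b):
    """Return (i, j) pairs matching each descriptor in A to its closest
    descriptor in B."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = [sum((x - y) ** 2 for x, y in zip(da, db)) for db in desc_b]
        j = min(range(len(desc_b)), key=dists.__getitem__)
        matches.append((i, j))
    return matches
```

The resulting correspondences are what allow the superimposed feature points FPij of one frame to be registered against those of the next.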
When step S140 is being performed, step S110 and step S130 continue to be performed, so the electronic device 200n will accumulate more 2D frames FMi. Generally speaking, the larger the quantity of 2D frames FMi, the higher the fidelity of the 3D model MDt.
In the present step, the modeling unit 232 of the electronic device 200n will build a relative angle relationship RS in the 3D model MDt. The relative angle relationship RS defines the pitch, yaw and roll angles.
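A relative angle relationship of this kind is commonly expressed as a composition of rotation matrices, one per axis. The sketch below is a generic illustration of that convention (row-major 3x3 matrices, roll-yaw-pitch composition order as an assumption), not the specification's own formulation.

```python
import math

# Illustrative sketch: pitch, yaw and roll as rotations about the
# x, y and z axes, composed into a single 3x3 rotation matrix.
def rotation(pitch, yaw, roll):
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    cr, sr = math.cos(roll), math.sin(roll)
    rx = [[1, 0, 0], [0, cp, -sp], [0, sp, cp]]   # pitch about x
    ry = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]   # yaw about y
    rz = [[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]]   # roll about z
    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]
    return matmul(rz, matmul(ry, rx))

def apply(m, v):
    """Rotate a 3D point v by matrix m."""
    return tuple(sum(m[i][k] * v[k] for k in range(3)) for i in range(3))
```

For instance, a yaw of 90 degrees carries the point (1, 0, 0) onto the negative z axis under this convention.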
Besides, the modeling unit 232 of the electronic device 200n can build a light source direction LD. The light source direction LD is a particular direction in the 3D space.
Then, the method proceeds to step S150, in which whether a rendering command CM2 corresponding to a viewing angle information Ak is received is determined by the imaging unit 233 of the electronic device 200n. The rendering command CM2 can be provided by an input device 300. The input device 300 can be realized by, for example, a mouse, a keyboard or a microphone. For instance, the participant can use the mouse to click the object 800 and then rotate the object 800 to generate a rendering command CM2 corresponding to a particular viewing angle information Ak. Or, the participant can press a directional key on the keyboard to generate a rendering command CM2 corresponding to a particular viewing angle information Ak. Or, the participant can input a particular segment of voice through the microphone to generate a rendering command CM2 corresponding to a particular viewing angle information Ak. If it is determined that the rendering command CM2 is received, the method proceeds to step S160.
Then, the method proceeds to step S160. In the present embodiment, even when the method has proceeded to step S160, step S110, step S130 and step S140 continue to be performed.
In step S160, an adjusted image IMtk corresponding to the viewing angle information Ak is rendered according to the 3D model MDt of the electronic device 200n and is then displayed on the imaging window W2 by the imaging unit 233. In the present step, the imaging unit 233 of the electronic device 200n can show light and shadow on the adjusted image IMtk according to the light source direction LD.
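One simple way such light and shadow could be produced from the light source direction LD is Lambertian shading, in which a surface point's brightness is the clamped dot product of its unit normal with the unit light direction. The sketch below is a generic illustration of that model, offered as an assumption rather than the specification's actual shading method.

```python
# Illustrative sketch: Lambertian shading of a surface point from a
# light source direction LD.
def lambert(normal, light_dir):
    """Brightness in [0, 1]: clamped dot product of the unit vectors."""
    def norm(v):
        n = sum(x * x for x in v) ** 0.5
        return tuple(x / n for x in v)
    n, l = norm(normal), norm(light_dir)
    return max(0.0, sum(a * b for a, b in zip(n, l)))
```

A face whose normal points toward the light receives full brightness, while a face pointing away receives none, which is what produces the light-and-shadow effect on the adjusted image.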
Referring to
In the present embodiment, the step S130 of capturing the 2D frames FMi, the step S140 of building the 3D model MDt and the step S160 of rendering the adjusted image IMtk are performed concurrently. Therefore, even if the rendering command CM2 was received by the participant's electronic device 200n long ago, the adjusted image IMtk can still be rendered smoothly and with a certain degree of clarity, since the 3D model MDt keeps being refined as more 2D frames FMi are accumulated.
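The concurrency of capture and modeling can be sketched as a producer-consumer pipeline, with capture (step S130) feeding frames into a queue while modeling (step S140) drains it. This is a minimal illustrative sketch of the pattern, not the device's actual implementation.

```python
import queue
import threading

# Illustrative sketch: frame capture and model building running
# concurrently, so 2D frames keep accumulating while the model is refined.
def run_pipeline(frames):
    q = queue.Queue()
    model = []  # stand-in for the 3D model being refined

    def capture():          # producer: step S130
        for f in frames:
            q.put(f)
        q.put(None)         # sentinel: stream ended

    def build():            # consumer: step S140
        while True:
            f = q.get()
            if f is None:
                break
            model.append(f)  # each new frame refines the model

    t1 = threading.Thread(target=capture)
    t2 = threading.Thread(target=build)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return model
```

Because the two threads run side by side, the model grows for as long as the stream supplies frames, which mirrors how the fidelity of the 3D model MDt improves over the course of the conference.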
Referring to
Referring to
As indicated in
As indicated in
Refer to the examples indicated in
According to the above embodiments, during a video conference, the video conference stream VS can be displayed on the display 220. Furthermore, a 3D model MDt can be built at the local end based on the 2D frames displayed on the display 220, and an adjusted image IMtk corresponding to a particular viewing angle information Ak can be rendered according to the 3D model MDt. Regardless of the rotation angle of the speaker with respect to an object 800, each participant can independently rotate the object 800 within the imaging window W2 as if the participant were on site holding the object 800 at hand.
While the invention has been described by way of example and in terms of the preferred embodiment(s), it is to be understood that the invention is not limited thereto. Based on the technical features of the embodiments of the present invention, a person ordinarily skilled in the art will be able to make various modifications and similar arrangements and procedures without departing from the spirit and scope of the invention. Therefore, the scope of protection of the present invention should be accorded with what is defined in the appended claims.
| Number | Date | Country | Kind |
|---|---|---|---|
| 112133672 | Sep 2023 | TW | national |