Embodiments of the present disclosure relate to the field of computer and network communication technology, and in particular, to a display method and apparatus based on augmented reality, a device, and a storage medium.
For video software and platforms, the video creation function is one of the core functions. The richness, diversity, and interest of the video creation function are important factors in attracting users and video creators to the software.
At present, when a user wants to add a special effect to the shooting environment while using the video creation function, the user can only choose from the fixed images or video materials provided by the platform.
However, because the number of such images or video materials is limited, the user cannot fully set up special effects for the real scene image of the shooting environment according to his or her creative concept, which results in poor flexibility of video creation and reduces the expressiveness of the video.
Embodiments of the present disclosure provide a display method and apparatus based on augmented reality, a device, and a storage medium to overcome the problems of poor flexibility of video creation and reduced video expressiveness.
In a first aspect, an embodiment of the present disclosure provides a display method based on augmented reality, including: receiving a first video; acquiring a video material by segmenting a target object from the first video; acquiring and displaying a real scene image, where the real scene image is acquired by an image collection apparatus; and displaying the video material at a target position of the real scene image in an augmented manner and playing the video material.
In a second aspect, an embodiment of the present disclosure provides a display apparatus based on augmented reality, including: a receiving unit, configured to receive a first video; an acquiring unit, configured to acquire a video material by segmenting a target object from the first video and to acquire a real scene image collected by an image collection apparatus; and a displaying unit, configured to display the real scene image, and to display the video material at a target position of the real scene image in an augmented manner and play the video material.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor and a memory; where the memory stores computer-executable instructions, and the at least one processor executes the computer-executable instructions stored in the memory to cause the at least one processor to perform the display method based on augmented reality described in the first aspect above and various possible designs of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer readable storage medium storing computer-executable instructions, where when the computer-executable instructions are executed by a processor, the display method based on augmented reality described in the first aspect above and various possible designs of the first aspect is implemented.
In a fifth aspect, an embodiment of the present disclosure provides a computer program product including a computer program, where when the computer program is executed by a processor, the display method based on augmented reality described in the first aspect above and various possible designs of the first aspect is implemented.
In a sixth aspect, an embodiment of the present disclosure provides a computer program, where when the computer program is executed by a processor, the display method based on augmented reality described in the first aspect above and various possible designs of the first aspect is implemented.
The embodiments provide a display method and apparatus based on augmented reality, a device, and a storage medium. The method includes: receiving a first video; acquiring a video material by segmenting a target object from the first video; acquiring and displaying a real scene image, where the real scene image is acquired by an image collection apparatus; and displaying the video material at a target position of the real scene image in an augmented manner and playing the video material. Since the video material is acquired by receiving the first video and segmenting the target object from the first video, the video material may be set according to the needs of the user, so that the user can customize the loading and displaying of the video material. The customized video material is displayed on the real scene image in a manner of augmented reality, forming a video special effect that aligns with the user's conception, enhancing the flexibility of video creation, and improving video expressiveness.
In order to illustrate the embodiments of the present disclosure or the technical solutions in the related art more clearly, the following briefly introduces the accompanying drawings that need to be used in the description of the embodiments or the related art. Obviously, the accompanying drawings in the following description are some embodiments of the present disclosure, and for those of ordinary skill in the art, other accompanying drawings may also be obtained from these accompanying drawings without any creative work.
In order to make the purposes, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure will be described clearly and completely below in combination with the accompanying drawings in the embodiments of the present disclosure. Obviously, the described embodiments are part of the embodiments of the present disclosure, not all of them. Based on the embodiments in the present disclosure, all other embodiments obtained by a person of ordinary skill in the art without creative work fall within the protection scope of the present disclosure.
Referring to
However, since the user can only use a fixed number of video materials preset in the video software in the related art, the video materials provided in the material library often cannot meet the usage needs of the user when shooting a video of a complex scene. Therefore, a video shot by the user using fixed materials cannot fully fulfill the user's conception, which leads to a decrease in the expressiveness of the video. Embodiments of the present disclosure provide a display method based on augmented reality to solve the above problems.
Referring to
S101: receiving a first video.
Specifically, the first video may be a video containing a video material of interest to the user, for example, a dance performance video, a singing performance video, an animation video, etc. In one possible implementation, the video material may be a portrait video material. In another possible implementation, the video material may be an object video material; for example, the first video may be a racing video, where the moving race car is the video material. In yet another possible implementation, the video material may be a cartoon animation video material, an animal video material, etc. The specific implementation form of the first video will not be elaborated here.
Furthermore, in a possible implementation, the first video is a video shot by the user in advance and stored in the terminal device; it may also be uploaded directly by the user through a storage medium, or downloaded through a network and stored in the terminal device.
S102: acquiring a video material by segmenting a target object from the first video.
Specifically,
In a possible implementation, the target object in the first video may be a portrait, and the terminal device may segment the portrait from the first video to acquire the corresponding portrait video material. Specifically, this process may include: recognizing features of the portrait contour in the first video through a video portrait segmentation technology, determining the portrait contour, and preserving the portion of the first video inside the portrait contour while removing the portion outside it, so as to obtain the portrait video material. The video portrait segmentation may be implemented in various possible ways, which will not be elaborated here. In addition, the target object in the first video may include an object, etc.; the terminal device may segment the target object from the first video to acquire the corresponding target object video material.
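The disclosure does not prescribe a particular segmentation technique, but the per-frame logic can be sketched as follows. This is a minimal illustration in Python, where `segment_person` is a hypothetical stand-in for whatever portrait segmentation model is actually used; the RGBA packing simply keeps the portrait pixels and makes everything outside the contour transparent.

```python
import cv2          # OpenCV, used here only for video decoding
import numpy as np

def segment_person(frame_rgb: np.ndarray) -> np.ndarray:
    """Stand-in for a real video portrait segmentation model: should
    return a float mask in [0, 1] that is ~1 inside the portrait contour
    and ~0 outside. Replace with an actual segmentation technique."""
    h, w, _ = frame_rgb.shape
    return np.ones((h, w), dtype=np.float32)  # dummy mask: keeps every pixel

def extract_portrait_material(video_path: str) -> list:
    """Keep the portrait portion of each frame and make the rest
    transparent, yielding RGBA frames usable as a video material."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame_bgr = cap.read()
        if not ok:
            break
        frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        mask = segment_person(frame_rgb)              # (H, W) in [0, 1]
        alpha = (mask * 255).astype(np.uint8)         # mask -> alpha channel
        rgba = np.dstack([frame_rgb, alpha])          # portrait kept, rest transparent
        frames.append(rgba)
    cap.release()
    return frames
```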
S103: acquiring and displaying a real scene image, where the real scene image is acquired by an image collection apparatus.
Exemplarily, the image collection apparatus may be, for example, a front-facing camera or a rear camera disposed on the terminal device, or another camera disposed outside the terminal device and communicating with the terminal device. The terminal device shoots the shooting environment with the camera and may obtain, in real time, the real scene image corresponding to the shooting environment; the real scene image is a real presentation of the shooting environment.
Furthermore, after acquiring the real scene image, the terminal device may display it in real time on the UI of the terminal device, so that the user may observe the real presentation of the shooting environment in real time. The process of acquiring and displaying the real scene image is the preparation process before the user shoots the video; after determining the specific shooting position and shooting angle by observing the shooting environment, the user may perform the subsequent video shooting process with the image collection apparatus.
S104: displaying the video material at a target position of the real scene image in an augmented manner and playing the video material.
Exemplarily, after determining the video material, the terminal device may display the video material at the target position of the real scene image. The target position may be set by the user and further adjusted according to a user instruction, or it may be a system default position. In a possible implementation, the displaying of the video material at the target position of the real scene image includes: first displaying the video material at a default position, such as the geometric center of the real scene image displayed in the UI; then adjusting the target position according to an instruction inputted by the user; and displaying the video material at the corresponding target position based on the adjustment result, so as to realize the user's setting and adjustment of the display position of the video material.
Furthermore, the video material is displayed at the target position of the real scene image in a manner of augmented reality (AR). Specifically, suppose the video material is a portrait video material whose content is a dancing portrait. When the video material is displayed at the target position of the real scene image (for example, next to the sofa in a user's living room image), the position relationship between the video material and the real scene image is fixed; that is, in the absence of a new user instruction changing the target position, the dancing portrait stays fixed next to the sofa. When the user moves or rotates the terminal device, the real scene image in the field of view displayed by the UI of the terminal device changes; however, the video material remains displayed at the fixed position in the real scene and does not move, that is, it is displayed in the real scene image in the manner of AR.
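The "fixed next to the sofa" behavior follows from anchoring the material to a world-space point and re-projecting that point into each new camera view. A minimal sketch, assuming a pinhole camera model with intrinsic matrix K and a per-frame world-to-camera pose (R, t) supplied by the tracking system; all names and values below are illustrative:

```python
import numpy as np

def project_anchor(anchor_world: np.ndarray,
                   R: np.ndarray, t: np.ndarray,
                   K: np.ndarray):
    """Project a fixed world-space anchor into the current camera view.
    R (3x3) and t (3,) are the world-to-camera pose for this frame;
    K is the 3x3 camera intrinsic matrix (pinhole model)."""
    p_cam = R @ anchor_world + t          # world -> camera coordinates
    u, v, w = K @ p_cam                   # camera -> homogeneous pixels
    return u / w, v / w                   # pixel position of the anchor

# The anchor never changes; only the pose does, so the material
# appears glued to the same spot in the real scene.
anchor = np.array([0.4, 0.0, 2.0])        # e.g. "next to the sofa"
K = np.array([[800.0,   0.0, 360.0],
              [  0.0, 800.0, 640.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)             # pose before the user moves
print(project_anchor(anchor, R, t, K))    # anchor's on-screen position
```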
Furthermore,
The method includes: receiving a first video; acquiring a video material by segmenting a target object from the first video; acquiring and displaying a real scene image, where the real scene image is acquired by an image collection apparatus; and displaying the video material at a target position of the real scene image in an augmented manner and playing the video material. Since the video material is acquired by receiving the first video and segmenting the target object from the first video, the video material may be set according to the needs of the user, so that the user can customize the loading and displaying of the video material. The customized video material is displayed on the real scene image in a manner of augmented reality, forming a video effect that aligns with the user's conception, enhancing the flexibility of video creation, and improving video expressiveness.
S201: receiving a first video.
S202: acquiring a video material by segmenting a target object from the first video.
S203: acquiring and displaying a real scene image, where the real scene image is acquired by an image collection apparatus.
S204: receiving a second user instruction, where the second user instruction includes screen coordinate information.
Specifically, the second user instruction is an instruction inputted by the user through the UI of the terminal device and is used to display the video material at a specified position.
In an embodiment, the second user instruction further includes at least one of the following: size information and angle information. The size information is used to indicate a display size of the video material; and the angle information is used to indicate a display angle of the video material relative to an image collection plane, where the image collection plane is a plane where the image collection apparatus is located.
In an embodiment, the second user instruction may be implemented through different operation gestures. For example, the user may characterize the size information through the relative movement of fingers (such as a two-finger pinch), so as to adjust the display size of the video material. The user may also characterize the angle information through an operation gesture, where the angle information controls the display angle of the video material: when the video material is a two-dimensional flat video, the angle information characterizes a display angle of the two-dimensional flat video relative to the image collection plane; when the video material is a three-dimensional stereoscopic video, the angle information characterizes a display angle of the three-dimensional stereoscopic video in three-dimensional space. The operation gestures may be implemented, for example, by rotating a single finger or by tapping; the specific gestures may be set as required and will not be elaborated here.
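As an illustration of how such gestures can be mapped to size and angle information, the following sketch derives a scale factor from a two-finger pinch and a rotation angle from a two-finger twist. The touch-point representation is an assumption, since the disclosure leaves the concrete gesture handling open:

```python
import math

def pinch_scale(p0_prev, p1_prev, p0_now, p1_now) -> float:
    """Ratio of current to previous two-finger distance; >1 enlarges
    the material, <1 shrinks it. Points are (x, y) screen tuples."""
    d_prev = math.dist(p0_prev, p1_prev)
    d_now = math.dist(p0_now, p1_now)
    return d_now / d_prev if d_prev > 0 else 1.0

def twist_angle(p0_prev, p1_prev, p0_now, p1_now) -> float:
    """Signed rotation (radians) of the line through the two fingers,
    usable as angle information for a two-dimensional flat material."""
    a_prev = math.atan2(p1_prev[1] - p0_prev[1], p1_prev[0] - p0_prev[0])
    a_now = math.atan2(p1_now[1] - p0_now[1], p1_now[0] - p0_now[0])
    return a_now - a_prev

# Two fingers move apart and rotate slightly:
print(pinch_scale((0, 0), (100, 0), (0, 0), (150, 10)))  # ~1.5x enlarge
print(twist_angle((0, 0), (100, 0), (0, 0), (150, 10)))  # small positive twist
```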
S205: determining a real scene coordinate point corresponding to the screen coordinate information in a current real scene image.
After acquiring the screen coordinate information according to the second user instruction, the terminal device determines, in the real scene image displayed on the screen, the real scene coordinate point corresponding to the screen coordinate information. The real scene coordinate point is used to characterize the position of the real scene in the shooting environment.
S206: determining a target position of the video material in the real scene image according to the real scene coordinate point.
In an embodiment, as shown in
S2061: acquiring, according to a simultaneous localization and mapping (SLAM) algorithm, a simultaneous localization and mapping plane corresponding to the real scene image, where the simultaneous localization and mapping plane is used to characterize a localization model of a real scene in the real scene image.
Specifically, the SLAM algorithm is a method for solving the problems of positioning, navigation, and map construction in an unknown environment. In this embodiment, the SLAM algorithm is used to process the real scene image information corresponding to the shooting environment and to achieve positioning between different real objects in the shooting environment, obtaining a SLAM plane, which characterizes the position relationship between the real scene image and the real objects in the shooting environment, that is, the localization model of the real objects in the real scene. The SLAM algorithm and the specific implementation of generating the SLAM plane through the SLAM algorithm belong to the prior art and will not be elaborated here.
S2062: determining, according to the localization model of the real scene in the real scene image represented by the simultaneous localization and mapping plane, a simultaneous localization and mapping plane coordinate point corresponding to the real scene coordinate point.
S2063: determining the target position according to the simultaneous localization and mapping plane coordinate point.
Specifically, according to the localization model characterized by the SLAM plane, the position of the real scene coordinate point in the SLAM plane, i.e., the SLAM coordinate point, may be determined; when determining the target position of the video material, the SLAM coordinate point may be used as the target position, so as to achieve the displaying of the video material in the real scene image.
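Steps S205 through S2063 amount to casting a ray through the tapped pixel and intersecting it with the SLAM plane. A minimal sketch of that ray-plane intersection, assuming the plane is given in world coordinates as n·x + d = 0 and the pose convention p_cam = R·p_world + t; production SLAM frameworks typically expose an equivalent hit-test call instead:

```python
import numpy as np

def screen_to_plane_point(screen_xy, K, R, t, plane_n, plane_d):
    """Cast a ray from the camera through the tapped pixel and intersect
    it with a SLAM plane given as n.x + d = 0 in world coordinates.
    Returns the world-space hit point used as the target position."""
    u, v = screen_xy
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # pixel -> camera-frame ray
    ray_world = R.T @ ray_cam                            # rotate ray into world frame
    origin = -R.T @ t                                    # camera center in world frame
    denom = plane_n @ ray_world
    if abs(denom) < 1e-9:
        return None                                      # ray parallel to the plane
    s = -(plane_n @ origin + plane_d) / denom
    return None if s < 0 else origin + s * ray_world     # behind the camera -> no hit
```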
S207: playing, according to the angle information, the video material at a display angle corresponding to the angle information.
Specifically, the angle information is used to indicate a display angle of the video material relative to an image collection plane, where the image collection plane is a plane where the image collection apparatus is located. Since the real scene image collected by the terminal device changes in real time with the position and shooting angle of the terminal device (for example, after the terminal device moves or rotates by a certain angle in three-dimensional space, the display angle of the real scene image shot and displayed by the terminal device also changes), the display angle of the video material displayed in the real scene image changes accordingly, and the video material is played at the corresponding display angle.
S208: acquiring an audio material corresponding to the video material, and playing the audio material simultaneously according to a playing timestamp of the video material while displaying the video material at the target position of the real scene image.
Exemplarily, since the video material is acquired from the first video, an audio material that corresponds to the playing timestamp of the video material and has the same playing duration may be acquired from the first video. The audio material may be played simultaneously with the video material according to the playing timestamp of the video material, thereby restoring the effect of the video material in the first video to a greater extent.
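One simple way to keep the audio locked to the video material is to derive the target audio sample index from the video's playing timestamp and re-seek only when the drift exceeds a tolerance. A sketch under that assumption; the 40 ms tolerance and the caller-managed audio stream are illustrative choices, not part of the disclosure:

```python
def sync_audio_position(video_timestamp_s: float, sample_rate: int,
                        audio_position: int) -> int:
    """Return the audio sample index matching the video's current playing
    timestamp. The caller seeks the audio stream to the returned index;
    within the tolerance, playback continues uninterrupted."""
    target = int(video_timestamp_s * sample_rate)
    drift_s = abs(target - audio_position) / sample_rate
    return target if drift_s > 0.040 else audio_position

# Video is at 2.5 s, audio has drifted ~90 ms behind: re-seek.
print(sync_audio_position(2.5, 44100, int(2.41 * 44100)))  # -> 110250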
S209: acquiring playing information of the video material, where the playing information is used to characterize a playing progress of the video material, and replaying the video material at the target position if it is determined that the playing of the video material has been completed according to the playing information.
Exemplarily, during the playing process of the video material, whether the playing of the video material has been completed is determined according to the playing information of the video material, such as a current playing duration, a current playing timestamp, or identification information indicating that the playing of the video material has been completed; the video material is replayed if its playing has been completed, so as to prevent the stopping of the video material from degrading the overall performance effect of the video.
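The loop decision in S209 reduces to comparing the playhead against the material's duration and wrapping to zero on completion. A minimal sketch, with a frame-driven playhead as an assumed implementation detail:

```python
def next_playhead(current_s: float, duration_s: float, dt_s: float) -> float:
    """Advance the material's playhead by one frame interval and wrap to
    0 on completion, so the material keeps looping at the target position
    instead of freezing on its last frame."""
    advanced = current_s + dt_s
    return 0.0 if advanced >= duration_s else advanced

# 30 fps playback of a 5 s material: the playhead wraps back to 0.
t = 4.99
t = next_playhead(t, 5.0, 1 / 30)
print(t)  # 0.0 -> the replay starts
```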
In this embodiment, steps S201-S203 are consistent with steps S101-S103 in the above embodiment; for details, refer to the discussion of steps S101-S103, which will not be repeated here.
Corresponding to the display method based on augmented reality of the above embodiments,
In one embodiment of the present disclosure, the video material includes a portrait video material, the portrait video material is obtained by performing video portrait segmentation on the first video.
In one embodiment of the present disclosure, the receiving unit 31 is further configured to: receive a second user instruction, where the second user instruction includes screen coordinate information; determine a real scene coordinate point corresponding to the screen coordinate information in a current real scene image; and determine the target position of the video material in the real scene image according to the real scene coordinate point.
In one embodiment of the present disclosure, the second user instruction further includes at least one of the following: size information and angle information; where the size information is used to indicate a display size of the video material; and the angle information is used to indicate a display angle of the video material relative to an image collection plane, where the image collection plane is a plane where the image collection apparatus is located.
In one embodiment of the present disclosure, when playing the video material, the displaying unit 33 is specifically configured to play, according to the angle information, the video material at a display angle corresponding to the angle information.
In one embodiment of the present disclosure, when determining the target position of the video material in the real scene image according to the real scene coordinate point, the receiving unit 31 is specifically configured to: acquire, according to a simultaneous localization and mapping algorithm, a simultaneous localization and mapping plane corresponding to the real scene image, where the simultaneous localization and mapping plane is used to characterize a localization model of a real scene in the real scene image; determine, according to the localization model of the real scene in the real scene image represented by the simultaneous localization and mapping plane, a simultaneous localization and mapping plane coordinate point corresponding to the real scene coordinate point; and determine the target position according to the simultaneous localization and mapping plane coordinate point.
In one embodiment of the present disclosure, the acquiring unit 32 is further configured to: acquire an audio material corresponding to the video material; and play the audio material simultaneously according to a playing timestamp of the video material while displaying the video material at the target position of the real scene image.
In one embodiment of the present disclosure, the acquiring unit 32 is further configured to: acquire playing information of the video material, where the playing information is used to characterize a playing progress of the video material; and replay the video material at the target position if it is determined that the playing of the video material has been completed according to the playing information.
The apparatus provided in this embodiment may be used to execute the technical solutions of the above method embodiments; its implementation principles and technical effects are similar, and will not be repeated in this embodiment.
Referring to
As shown in
In general, the following apparatuses may be connected to the I/O interface 905: an input apparatus 906, such as a touchscreen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output apparatus 907, such as a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage apparatus 908, such as a magnetic disk, a hard disk, etc.; and a communication apparatus 909. The communication apparatus 909 allows the electronic device 900 to exchange data with other devices through wireless or wired communication. Although
In particular, according to the embodiment of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product, which includes a computer program loaded on a computer readable medium, and the computer program includes program code for executing the method shown in a flowchart. In such an embodiment, the computer program may be downloaded and installed from the network through the communication apparatus 909, or installed from the storage apparatus 908, or installed from the ROM 902. When the computer program is executed by the processing apparatus 901, the above functions defined in the method of the embodiment of the present disclosure are executed.
The embodiment of the present disclosure further provides a computer program stored in a readable storage medium; one or more processors of an electronic device may read the computer program from the readable storage medium, and the one or more processors execute the computer program to enable the electronic device to execute the solution provided by any of the above embodiments.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium, a computer readable storage medium, or any combination of the two. The computer readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium containing or storing a program, which may be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, the computer readable signal medium may include a data signal transmitted in a baseband or as a part of a carrier wave, in which computer readable program code is carried. Such a transmitted data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. The computer readable signal medium may also be any computer readable medium other than the computer readable storage medium, and may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer readable medium may be transmitted using any suitable medium, including an electrical wire, an optical fiber cable, RF (radio frequency), etc., or any suitable combination of the foregoing.
The above computer readable medium may be embodied in the above electronic device; and may also exist alone without being assembled into the electronic device.
The above computer readable medium carries one or more programs which, when executed by the electronic device, enables the electronic device to execute the method illustrated in the above embodiments.
The computer program code for implementing the operations of the embodiments of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the scenario involving the remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the architecture, functionality, and operations of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, a program segment, or a portion of code, which includes one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the drawing. For example, two blocks shown in succession may, in fact, be executed substantially in parallel, or the blocks may also sometimes be executed in the reverse order, depending on the functions involved. It is also noted that each block in the block diagrams and/or flowcharts, and a combination of the blocks may be implemented by a dedicated hardware-based system for performing specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented in a software manner, and may also be implemented in a hardware manner. In some cases, the name of a unit does not constitute a limitation on the unit itself; for example, a first obtaining unit may also be described as "a unit for obtaining at least two IP addresses".
The functions described in the embodiments of the present disclosure may be executed, at least in part, by one or more hardware logic components. For example, unrestrictedly, exemplary types of hardware logic components that may be used include: a field-programmable gate array (FPGA), an application specific integrated circuit (ASIC), an application specific standard product (ASSP), a system on chip (SOC), a complex programmable logic device (CPLD) and so on.
In the context of the present disclosure, a machine readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine readable medium may be a machine readable signal medium or a machine readable storage medium. The machine readable storage medium may include, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of the machine readable storage medium may include an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a first aspect, a display method based on augmented reality is provided according to one or more embodiments of the present disclosure, including: receiving a first video; acquiring a video material by segmenting a target object from the first video; acquiring and displaying a real scene image, where the real scene image is acquired by an image collection apparatus; and displaying the video material at a target position of the real scene image in an augmented manner and playing the video material.
According to one or more embodiments of the present disclosure, the video material includes a portrait video material, the portrait video material is obtained by performing video portrait segmentation on the first video.
According to one or more embodiments of the present disclosure, the method further includes: receiving a second user instruction, where the second user instruction includes screen coordinate information; determining a real scene coordinate point corresponding to the screen coordinate information in a current real scene image; and determining the target position of the video material in the real scene image according to the real scene coordinate point.
According to one or more embodiments of the present disclosure, the second user instruction further includes at least one of the following: size information and angle information; where the size information is used to indicate a display size of the video material; and the angle information is used to indicate a display angle of the video material relative to an image collection plane, where the image collection plane is a plane where the image collection apparatus is located.
According to one or more embodiments of the present disclosure, the playing the video material includes: playing, according to the angle information, the video material at a display angle corresponding to the angle information.
According to one or more embodiments of the present disclosure, the determining the target position of the video material in the real scene image according to the real scene coordinate point includes: acquiring, according to a simultaneous localization and mapping algorithm, a simultaneous localization and mapping plane corresponding to the real scene image, where the simultaneous localization and mapping plane is used to characterize a localization model of a real scene in the real scene image; determining, according to the localization model of the real scene in the real scene image represented by the simultaneous localization and mapping plane, a simultaneous localization and mapping plane coordinate point corresponding to the real scene coordinate point; and determining the target position according to the simultaneous localization and mapping plane coordinate point.
According to one or more embodiments of the present disclosure, the method further includes: acquiring an audio material corresponding to the video material; and playing the audio material simultaneously according to a playing timestamp of the video material while displaying the video material at the target position of the real scene image.
According to one or more embodiments of the present disclosure, the method further includes: acquiring playing information of the video material, where the playing information is used to characterize a playing progress of the video material; and replaying the video material at the target position if it is determined that the playing of the video material has been completed according to the playing information.
In a second aspect, a display apparatus based on augmented reality is provided according to one or more embodiments of the present disclosure, including: a receiving unit, configured to receive a first video; an acquiring unit, configured to acquire a video material by segmenting a target object from the first video and to acquire a real scene image collected by an image collection apparatus; and a displaying unit, configured to display the real scene image, and to display the video material at a target position of the real scene image in an augmented manner and play the video material.
According to one or more embodiments of the present disclosure, the video material includes a portrait video material, the portrait video material is obtained by performing video portrait segmentation on the first video.
According to one or more embodiments of the present disclosure, the receiving unit is further configured to: receive a second user instruction, where the second user instruction includes screen coordinate information; determine a real scene coordinate point corresponding to the screen coordinate information in a current real scene image; and determine the target position of the video material in the real scene image according to the real scene coordinate point.
According to one or more embodiments of the present disclosure, the second user instruction further includes at least one of the following: size information and angle information; where the size information is used to indicate a display size of the video material; and the angle information is used to indicate a display angle of the video material relative to an image collection plane, where the image collection plane is a plane where the image collection apparatus is located.
According to one or more embodiments of the present disclosure, when playing the video material, the displaying unit is specifically configured to play, according to the angle information, the video material at a display angle corresponding to the angle information.
According to one or more embodiments of the present disclosure, when determining the target position of the video material in the real scene image according to the real scene coordinate point, the receiving unit is specifically configured to: acquire, according to a simultaneous localization and mapping algorithm, a simultaneous localization and mapping plane corresponding to the real scene image, where the simultaneous localization and mapping plane is used to characterize a localization model of a real scene in the real scene image; determine, according to the localization model of the real scene in the real scene image represented by the simultaneous localization and mapping plane, a simultaneous localization and mapping plane coordinate point corresponding to the real scene coordinate point; and determine the target position according to the simultaneous localization and mapping plane coordinate point.
According to one or more embodiments of the present disclosure, the acquiring unit is further configured to: acquire an audio material corresponding to the video material; and play the audio material simultaneously according to a playing timestamp of the video material while displaying the video material at the target position of the real scene image.
According to one or more embodiments of the present disclosure, the acquiring unit is further configured to: acquire playing information of the video material, where the playing information is used to characterize a playing progress of the video material; and replay the video material at the target position if it is determined that the playing of the video material has been completed according to the playing information.
In a third aspect, an electronic device is provided according to one or more embodiments of the present disclosure, including: at least one processor and a memory; where the memory stores computer-executable instructions, and the at least one processor executes the computer-executable instructions stored in the memory to cause the at least one processor to perform the display method based on augmented reality described in the first aspect above and various possible designs of the first aspect.
In a fourth aspect, a computer readable storage medium is provided according to one or more embodiments of the present disclosure; the computer readable storage medium stores computer-executable instructions, where when the computer-executable instructions are executed by a processor, the display method based on augmented reality described in the first aspect above and various possible designs of the first aspect is implemented.
In a fifth aspect, a computer program product is provided according to one or more embodiments of the present disclosure, including a computer program, where when the computer program is executed by a processor, the display method based on augmented reality described in the first aspect above and various possible designs of the first aspect is implemented.
In a sixth aspect, a computer program is provided according to one or more embodiments of the present disclosure, where when the computer program is executed by a processor, the display method based on augmented reality described in the first aspect above and various possible designs of the first aspect is implemented.
The above description is merely of preferred embodiments of the present disclosure and an explanation of the technical principles employed. Those skilled in the art should understand that the scope of the disclosure involved in the present disclosure is not limited to the technical solutions formed by the specific combination of the above technical features, and should also cover, without departing from the above disclosed concept, other technical solutions formed by any combination of the above technical features or their equivalent features, for example, a technical solution formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present disclosure.
Additionally, although the operations are depicted in a particular order, this should not be understood as requiring that the operations be performed in the particular order shown or in a sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although the above discussion contains several specific implementation details, these should not be construed as limitations on the scope of the present disclosure. Certain features described in the context of separate embodiments may also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or method logic actions, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above. Rather, the specific features and actions described above are merely example forms for implementing the claims.
Number | Date | Country | Kind
---|---|---|---
202011508594.9 | Dec 2020 | CN | national
This application is a continuation of International Application No. PCT/SG2021/050721, filed on Nov. 24, 2021, which claims priority to Chinese Patent Application No. 202011508594.9, filed with the China National Intellectual Property Administration on Dec. 18, 2020 and entitled "DISPLAY METHOD AND APPARATUS BASED ON AUGMENTED REALITY, DEVICE, AND STORAGE MEDIUM". The above applications are incorporated herein by reference in their entireties.
Relationship | Number | Date | Country
---|---|---|---
Parent | PCT/SG2021/050721 | Nov 2021 | WO
Child | 18332243 | | US