Augmented Reality System Supporting Customized Multi-Channel Interaction

Information

  • Patent Application
  • Publication Number
    20210375061
  • Date Filed
    January 25, 2021
  • Date Published
    December 02, 2021
Abstract
The embodiments of the present disclosure disclose an augmented reality system that supports customized multi-channel interaction. One embodiment of the augmented reality system comprises: a head-mounted sensor assembly, a computing device, and a display module; the head-mounted sensor assembly is used to capture the user's multi-channel interactive input information and transmit the interactive input information to the computing device; the computing device is used to generate or modify the display content of the augmented reality according to the interactive input information; the display module is used to superimpose the display content of the augmented reality onto the background content. By arranging the display module at the far end, away from the head-mounted sensor assembly, the augmented reality system can simplify the structure of the head-mounted sensor assembly and reduce its weight, providing convenience for installing other sensors. At the same time, the system may incorporate multiple ways of interaction, thereby enriching the system's interaction with the user and improving the user's experience.
Description
CROSS-REFERENCE TO RELATED APPLICATION

This application claims priority from the Chinese patent application 202010476286.6 filed May 29, 2020, the content of which is incorporated herein by reference in its entirety.


TECHNICAL FIELD

The embodiments of the present disclosure relate to the field of augmented reality, and in particular to an augmented reality system that supports customized multi-channel interaction.


BACKGROUND ART

AR (Augmented Reality) technology is a novel human-computer interaction technology that can add virtual information to the real world, thereby skillfully fusing virtual information with the real world, i.e., “augmenting” the real world.


Existing AR display devices mainly include AR head-mounted devices. An AR head-mounted device integrates the display unit into the head-mounted device, using near-eye display technology. Although such a device achieves the integration of components, it increases the weight of the head-mounted device. Therefore, the device is often so heavy that it dampens the user's experience.


In addition, the use of the near-eye display technology usually limits the viewing angle of the lens, which will also dampen the user's experience.


Accordingly, this field needs a new augmented reality system to solve the above problem.


SUMMARY


The content of the present disclosure is used to introduce concepts in a brief form, and these concepts will be described in detail in the following embodiments. The content portion of the present disclosure is not intended to identify the key features or essential features of the claimed technical solution, nor is it intended to limit the scope of the claimed technical solution.


To solve the above problem, some embodiments of the present disclosure propose an augmented reality system that supports customized multi-channel interaction, including: a head-mounted sensor assembly, a computing device, and a display module; the head-mounted sensor assembly is used to capture the user's multi-channel interactive input information and transmit the interactive input information to the computing device; the computing device is used to generate or modify the display content of the augmented reality according to the interactive input information; the display module is used to superimpose the display content of the augmented reality onto the background content, wherein the display module is set at the far end relative to the head-mounted sensor assembly.


In some embodiments, the system further comprises an augmented reality window displayed on the display module. The augmented reality window is used to display the display content of the augmented reality, and the position of the augmented reality window is determined through the posture of the user's head relative to the display module, thereby simulating the real effect of the display content of the augmented reality; the shape and size of the augmented reality window are determined by setting the shape of a virtual window, wherein the virtual window is a near-eye window formed by back projection of the augmented reality window.


In some embodiments, the position where the augmented reality window is displayed on the display module is determined through the following steps: establishing a coordinate system according to the display range of the display module; determining, based on the following formula, the point coordinates of the points on the boundary line where the virtual window is projected to the display module: PAi=λiPPi; wherein P represents the coordinates of the center point of the two eyes; Pi represents the coordinates of the ith point on the edge of the virtual window; Ai represents the point coordinates of the point where the straight line starting from point P as the origin and passing through point Pi intersects the display module; PAi represents the distance from point P to Ai; PPi represents the distance from point P to Pi; λi represents the quotient of PAi and PPi, and λi is determined by the Z-axis coordinates of the three points P, Ai, and Pi.


In some embodiments, the computing device determines the object in the display content that the eyeball focuses on by the following formula: λ1*λ2*λ3>0; wherein λ1 is determined by the following formula:

λ1 = 1, if (x − xr)² + (y − yr)² + (z − zr)² ≤ r²;
λ1 = 0, if (x − xr)² + (y − yr)² + (z − zr)² > r²;
Wherein (x, y, z) are the coordinates of the object in the display content; (xr, yr, zr) are the three-dimensional coordinates of the focus point of the left and right eyes' lines of sight; r represents the attention radius, wherein the spherical range formed by the attention radius represents the preset range of the focus point; λ1=1 indicates that the object is within the preset range of the focus point; λ1=0 indicates that the object is outside the preset range of the focus point; wherein λ2 is determined through the following formula:

λ2 = 1, if t ≥ td;
λ2 = 0, if t < td;
Wherein t represents the time that the focus point continuously selects the object within the preset time period, and td is the preset time threshold; λ2=1 indicates that the time the object has been focused on reaches or exceeds the preset threshold; λ2=0 indicates that the time the object has been focused on is less than the preset threshold; wherein λ3 is determined by the following formula:

λ3 = 1, if xmin ≤ x ≤ xmax and ymin ≤ y ≤ ymax;
λ3 = 0, if xmin > x or x > xmax or ymin > y or y > ymax;
Wherein (xmin, xmax) represents the minimum and maximum values of the augmented reality display window in the x-axis direction, (ymin, ymax) represents the minimum and maximum values of the augmented reality display window in the y-axis direction; λ3=1 means that the object is inside the augmented reality display window; λ3=0 means that the object is outside the augmented reality display window.


In some embodiments, when the number of objects in the display content that the eyeballs have focused on is more than one, the computing device determines the order in which the multiple objects are displayed through the following steps: for the determined objects that the eyeballs have focused on, the probability of each object being displayed is determined through the following formula:








Pi = 1 − min(1, di/r);




Wherein Pi represents the probability of the ith object being displayed; di represents the distance between the ith object and the gaze point; r represents the radius of the spherical area; based on the probability of each of the objects being displayed, determine the object to be displayed first.


In some embodiments, in response to the display module displaying multiple augmented reality windows, and the object of the display content being within the range of the multiple augmented reality windows or the object being selected by multiple interactive operations, the computing device displays one by one the objects modified by each of the interactive operations.


In some embodiments, the background content comprises acquired images or videos, and generated virtual dynamic or static scenes.


In some embodiments, the augmented reality display content comprises images or videos, generated virtual dynamic or static scenes or objects, as well as visualization effects of interactive operations. The augmented reality display content can be modified in real time by interactive operations, and only the part of it within the range of the augmented reality window is displayed.


In some embodiments, the head-mounted sensor assembly comprises at least one of the following: a camera, a microphone, a gesture sensor, and an eye movement tracking sensor, and the interactive operation characterized by the interactive input information comprises at least one of the following: a gesture interactive operation, a voice interactive operation and an eye movement interactive operation.


In some embodiments, the display mode of the display module comprises a two-dimensional plane display and a three-dimensional space display. When the display module performs the two-dimensional plane display, the display module comprises at least one of the following: a computer monitor, a tablet computer, and a screen projection; when the display module performs the three-dimensional space display, the display module comprises at least one of the following: a 3D projector and a 3D display.


One of the above embodiments of the present disclosure has the following beneficial effects: by arranging the display module to the far end of the head-mounted sensor assembly, the structure of the head-mounted sensor assembly can be simplified, and the weight of the head-mounted sensor assembly can be reduced, providing convenience for installing other sensors. Furthermore, by setting different sensors, the interaction between the system and the user is enriched, and the user experience is improved. In addition, compared with near-eye display, the display module can reduce the limitation of the lens viewing angle and improve the user's experience.





DESCRIPTION OF FIGURES

The above and other features, advantages and aspects of the embodiments of the present disclosure will become more apparent in conjunction with the drawings and with reference to the following embodiments. Throughout the drawings, the same or similar reference signs indicate the same or similar elements. It should be understood that the drawings are schematic, and the components and elements are not necessarily drawn to scale.



FIG. 1 is a schematic diagram of some structures of an augmented reality system supporting customized multi-channel interaction according to the present disclosure;



FIG. 2 is a schematic diagram of some further structures of an augmented reality system supporting customized multi-channel interaction according to the present disclosure;



FIG. 3 is an application scenario diagram of an augmented reality system supporting customized multi-channel interaction used by more than one person according to the present disclosure.





DETAILED DESCRIPTION

Embodiments of the present disclosure will be described hereinafter in more detail with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure can be implemented in various forms and shall not be construed as being limited to the embodiments set forth herein. On the contrary, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are used only for exemplary purposes, not to limit the protection scope of the present disclosure.


In the description of the present disclosure, it should be noted that such terms as “installation”, “connected with” and “connected to” should be understood in a broad sense, unless otherwise clearly specified and limited. For example, the connection can be fixed, or detachable, or integral; it can be a mechanical connection or an electrical connection; it can be directly connected or indirectly connected through an intermediate medium, and it can be an internal connection between two components. For a person having ordinary skill in the art, the specific meanings of the above-mentioned terms in the present disclosure shall be understood in specific situations.


In addition, it should be noted that, for ease of description, the drawings only show the parts related to the relevant disclosure. In the case of no conflict, the embodiments in the present disclosure and the features in the embodiments can be combined with each other. Hereinafter, the present disclosure will be described in detail with reference to the drawings and in conjunction with embodiments.


Besides, in the description of the present disclosure, the terms “up”, “down”, “left”, “right”, “in”, “out” and other terms indicating directions or positional relationships are based on the directions or positional relationship shown in the drawings, for ease of description only, instead of indicating or implying that the device or element must have a specific orientation, be constructed and operated in a specific orientation, and therefore shall not be construed as a limitation to the present invention.


It should be noted that such concepts as “first” and “second” mentioned in the present disclosure are only used to distinguish different devices, modules or units, not to limit the order of functions performed by these devices, modules or units or the interdependence thereof.


It should be noted that such modifications as “one” and “more” mentioned in the present disclosure are illustrative and not restrictive. Those skilled in the art shall understand that, unless clearly stated otherwise in the context, they should be understood as “one or more”.


The names of messages or information exchanged between multiple devices in the embodiments of the present disclosure are used only for illustrative purposes, not to limit the scope of these messages or information.


Hereinafter, the present disclosure will be described in detail with reference to the drawings and in conjunction with embodiments.


First, refer to FIG. 1. FIG. 1 is a schematic diagram of some structures of an augmented reality system supporting customized multi-channel interaction according to the present disclosure. As shown in FIG. 1, the head-mounted sensor assembly 1 is used to capture the user's multi-channel interactive input information and transmit the collected interactive input information to the computing device 2. The computing device 2 generates or modifies the display content of the augmented reality according to the aforementioned interactive input information and transmits it to the display module 5. The display module 5 superimposes and displays the background content and the computed augmented reality display content. It should be noted that, unlike the prior art, which displays onto a head-mounted display device, the display module here is set apart from the head-mounted sensor assembly, at the far end. For example, the display module 5 may be a computer monitor, a plasma display screen, a liquid crystal display screen, and so on. Those skilled in the art can make a selection according to the actual situation. No limitation is made here.
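To make this data flow concrete, the following is a minimal sketch in Python, assuming only the three-component structure described above; all class names, method names, and fields are invented for illustration and are not taken from the present disclosure.

```python
# A minimal sketch of the data flow shown in FIG. 1; all names below are
# invented for illustration and are not taken from the present disclosure.
from dataclasses import dataclass
from typing import List

@dataclass
class InteractiveInput:
    """One multi-channel input sample captured by the head-mounted sensors."""
    channel: str   # e.g. "gesture", "voice", or "eye"
    payload: dict  # raw sensor reading for that channel

class HeadMountedSensorAssembly:
    def capture(self) -> List[InteractiveInput]:
        # Hypothetical polling of camera, microphone, gesture sensor,
        # and eye movement tracking sensor.
        return [InteractiveInput("gesture", {"hand": "right", "motion": "swipe"})]

class ComputingDevice:
    def update_display_content(self, inputs: List[InteractiveInput]) -> dict:
        # Generate or modify the AR display content from the interactive inputs.
        return {"objects": [inp.payload for inp in inputs]}

class DisplayModule:
    def render(self, background: dict, ar_content: dict) -> None:
        # Superimpose the AR display content onto the background content.
        print("background:", background, "| AR content:", ar_content)

# One frame of the loop: sensors -> computing device -> remote display module.
sensors, computer, display = HeadMountedSensorAssembly(), ComputingDevice(), DisplayModule()
display.render({"video": "background"}, computer.update_display_content(sensors.capture()))
```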


To be specific, the aforementioned background content may be an image or video displayed in the display module 5. It can also be a virtual dynamic or static scene generated by conventional technical means. Taking the generation of a dynamic scene as an example, first, the elements of this dynamic scene may be determined, namely the target, background, and target movement and the like. Next, the path and angle of movement of the target and so on are determined. In the end, the required dynamic scene image is generated. Those skilled in the art can generate the aforementioned virtual dynamic scene or static scene according to the prior art or conventional technical means.


The above augmented reality display content may include images or videos, generated virtual dynamic or static scenes or objects, and visualization effects of interactive operations. The interactive operations may be human-computer interactive operations performed by the user with the system through the head-mounted sensor. Specifically, the interactive operations may include at least one of the following: gesture interactive operation, voice interactive operation, and eye movement interactive operation. In order to complete the above operations, relevant sensors can be provided on the head-mounted sensor assembly to collect information. The sensor assembly may include a camera, a microphone, a gesture sensor, and an eye movement tracking sensor. The user's interactive input information is acquired through the above-mentioned sensors.


Further, the corresponding visualization effects can be presented through the above interactive operations. For example, when a gesture sensor is used, the effect of a hand moving an object in the augmented reality display content can be simulated through the acquired gesture interactive input information of the user, and so on.


In response to receiving the aforementioned interactive input information, the computing device implements the corresponding interactive operations. In other words, the augmented reality display content can be modified in real time by the computing device through interactive operations, and only the part of it within the augmented reality window is displayed. In this way, the function of the computing device generating or modifying the content of the augmented reality is realized. Those skilled in the art can choose among existing related sensors to implement the above interactive operations by conventional technical measures.


Next, the system will be described with reference to FIG. 1 and FIG. 2. FIG. 2 is a schematic diagram of some further structures of an augmented reality system supporting customized multi-channel interaction according to the present disclosure. As shown in FIG. 1 and FIG. 2, the augmented reality system further comprises an augmented reality window 3 displayed on the display module 5. The augmented reality window 3 is used to display the display content of the augmented reality. To be specific, the shape and size of the augmented reality window 3 can be set by the user or by system default. For example, the user inputs the shape and size of the augmented reality window 3 through a human-computer interactive operation interface. The shape can be round, rectangular, etc. The size may be the side length of the above shape, the radius of the circle, and so on.


Further, the augmented reality window 3 may be back-projected to the near-eye end of the user to form a virtual window 4. The shape of the virtual window 4 is the same as the shape of the augmented reality window 3. The distance between the virtual window 4 and the user affects the size of the augmented reality window 3. That is, when the virtual window 4 is close to the user, the size of the augmented reality window 3 presented to the display module 5 becomes larger. Therefore, without the user moving, the size of the augmented reality window 3 can be adjusted by adjusting the distance between the virtual window 4 and the user.
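The relation between the virtual window's distance and the augmented reality window's size follows from similar triangles: projecting a near-eye window of half-width s, held at distance d from the eyes, onto a display at distance D scales it by D/d. The short sketch below illustrates this; the function name and numeric values are assumptions for illustration only.

```python
# Similar-triangles sketch of the relation described above; the function
# name and numeric values are illustrative assumptions.
def projected_half_width(s: float, d: float, D: float) -> float:
    """Half-width on the display of a virtual window of half-width s,
    held at distance d from the eyes, with the display at distance D."""
    return s * D / d

print(projected_half_width(s=0.10, d=0.50, D=2.0))  # 0.4: window spans 0.8 m
print(projected_half_width(s=0.10, d=0.25, D=2.0))  # 0.8: closer -> larger window
```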


The position where the augmented reality window 3 is displayed to the display module 5 is determined through the following steps:


The first step is to establish a coordinate system according to the display range of the display module 5.


In some embodiments, a point can be selected in the display range of the display module 5 as the origin of coordinates. Further, a three-axis coordinate system is established on the basis of the origin. As an example, the upper left corner of the display range of the display module 5 may be taken as the origin of coordinates, the length direction as the X-axis, the height direction as the Y-axis, and the direction perpendicular to the display module 5, between the user and the display module 5, as the Z-axis, thereby forming a three-axis coordinate system.


The second step is to determine according to the following formula the point coordinates of the point on the boundary line where the virtual window 4 is projected to the display module 5:






PAi=λiPPi;


Wherein P represents the coordinates of the center point of the two eyes, wherein the coordinates of the point P can be determined through related algorithms. For example, the PNP (Perspective-n-Point) algorithm or the EPNP (Efficient Perspective-n-Point) algorithm can obtain the center point coordinates of the two eyes through key point detection, coordinate system conversion, and so on.
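As a hedged illustration of this step, the sketch below estimates the head pose with OpenCV's solvePnP using the EPNP variant and then takes the midpoint of the two eye corners as point P; the 3D face-model landmarks, 2D detections, and camera intrinsics are placeholder values, and the midpoint choice is an assumption for illustration, not a method prescribed by the present disclosure.

```python
# Hedged sketch: estimate head pose with OpenCV's solvePnP (EPNP variant),
# then take the midpoint of the two eye corners as point P. The face-model
# landmarks, 2D detections, and camera intrinsics are placeholders.
import numpy as np
import cv2

model_points = np.array([    # 3D landmarks in a head-model frame (placeholder)
    [-30.0,   0.0,   0.0],   # left eye corner
    [ 30.0,   0.0,   0.0],   # right eye corner
    [  0.0, -30.0, -10.0],   # nose tip
    [  0.0,  40.0,  -5.0],   # chin
])
image_points = np.array([    # matching 2D keypoint detections (placeholder)
    [320.0, 240.0], [380.0, 238.0], [350.0, 300.0], [352.0, 180.0],
])
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0,   0.0,   1.0]])

ok, rvec, tvec = cv2.solvePnP(model_points, image_points, camera_matrix,
                              None, flags=cv2.SOLVEPNP_EPNP)
if ok:
    R, _ = cv2.Rodrigues(rvec)                 # rotation vector -> matrix
    eyes_mid_model = np.array([0.0, 0.0, 0.0]) # midpoint of the two eye corners
    P = R @ eyes_mid_model + tvec.ravel()      # point P in camera coordinates
    print("P =", P)
```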


Pi represents the coordinates of the ith point on the edge of the virtual window 4, wherein the size of the augmented reality window 3 can be adjusted by adjusting the distance between the virtual window 4 and the user without the user moving. When determining the size of the augmented reality window, the distance from the virtual window 4 to the user is known. In turn, the coordinates of the ith point on the edge of the virtual window 4 can be determined through the size, shape, and P point coordinates of the virtual window 4.


Ai represents the point coordinates of the point where the straight line formed by starting from point P as the origin and passing through point Pi intersects the display module 5;


PAi represents the distance from point P to Ai;


PPi represents the distance from point P to Pi, wherein the value of PPi can be determined through the coordinates of point P and the coordinates of point Pi;


λi represents the quotient of PAi and PPi, and λi is determined by the Z-axis coordinates of the three points P, Ai, and Pi. To be specific, the distance from the virtual window 4 to the user is known, and the distance from the display module to the user can be determined through the coordinates of point P.


The value of PAi can be calculated by the above formula. Since the point Ai is presented on the display module 5, the coordinates of point Ai can be determined according to the value of PAi.


Finally, the point coordinates of multiple points on the above-mentioned boundary determine the display range of the augmented reality window and its display position on the display module.
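A minimal sketch of this projection step, assuming the coordinate system established above with the display plane at z = 0 and the z-axis pointing from the display toward the user: the intersection Ai = P + λi(Pi − P) is fixed by requiring z = 0 at Ai, which is consistent with PAi=λiPPi and with λi being determined by the z-coordinates of P, Ai, and Pi. The numeric values are illustrative.

```python
# Sketch of the projection step under the coordinate system above: the
# display plane is z = 0 and the z-axis points from the display toward the
# user. Requiring z = 0 at Ai fixes the quotient λi = PAi/PPi from the
# z-coordinates alone, consistent with PAi=λiPPi. Values are illustrative.
import numpy as np

def project_window_edge(P: np.ndarray, edge_points: np.ndarray) -> np.ndarray:
    """Project virtual-window edge points Pi through eye center P onto z = 0."""
    z_p = P[2]
    corners = []
    for Pi in edge_points:
        lam = z_p / (z_p - Pi[2])           # λi = PAi / PPi, from z-coordinates
        corners.append(P + lam * (Pi - P))  # Ai on the line P -> Pi, with z = 0
    return np.array(corners)

# Eyes 2 m from the screen; square virtual window 0.5 m in front of the eyes:
P = np.array([1.0, 1.0, 2.0])
edge = np.array([[0.9, 0.9, 1.5], [1.1, 0.9, 1.5],
                 [1.1, 1.1, 1.5], [0.9, 1.1, 1.5]])
print(project_window_edge(P, edge))  # AR-window corners on the display plane
```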


Further, the computing device may determine the object in the display content that the user's eyeballs are focusing on by using the following formula:





λ1*λ2*λ3>0;


Wherein λ1 is determined by the following formula:







λ1 = 1, if (x − xr)² + (y − yr)² + (z − zr)² ≤ r²;
λ1 = 0, if (x − xr)² + (y − yr)² + (z − zr)² > r²;






Wherein (x, y, z) are the coordinates of the object in the display content, wherein the coordinates of the object can be determined in a variety of ways, for example, by a target detection algorithm or a related spatial coordinate algorithm. In addition, a technician can also preset the coordinates of the object.


(xr, yr, zr) are the three-dimensional coordinates of the focus point of the left and right eyes' lines of sight, which can be determined by existing eye movement tracking sensors.


r represents the attention radius, wherein the spherical range formed by the attention radius represents the preset range of the focus point. The attention radius may be determined by a technician through extensive experiments, or set by the computing device by default.


In response to λ1=1, it indicates that the object is in the preset range of the focus point.


In response to λ1=0, it indicates that the object is out of the preset range of the focus point.


Wherein λ2 is determined through the following formula:







λ2 = 1, if t ≥ td;
λ2 = 0, if t < td;






Wherein t represents the time that the focus point continuously selects the object within the preset time period (the selection may be the focus point overlapping the display range of the object), and td is the preset time threshold. The preset time period and the preset threshold can be determined by a technician through extensive experiments, or set by the computing device by default.


In response to λ2=1, it indicates that the time the object has been focused on reaches or exceeds the preset threshold;


In response to λ2=0, it indicates that the time the object has been focused on is less than a preset threshold;


Wherein the specific formula of λ3 is as follows:







λ3 = 1, if xmin ≤ x ≤ xmax and ymin ≤ y ≤ ymax;
λ3 = 0, if xmin > x or x > xmax or ymin > y or y > ymax;






Wherein (xmin, xmax) represents the minimum and maximum values of the augmented reality display window in the x-axis direction, (ymin, ymax) represents the minimum and maximum values of the augmented reality display window in the y-axis direction;


λ3=1 means that the object is inside the augmented reality display window;


λ3=0 means that the object is outside the augmented reality display window.
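Putting the three tests together, the sketch below checks λ1*λ2*λ3>0 for a single object; all variable names and numeric values are illustrative assumptions.

```python
# Combined check of the three conditions above: an object is the gaze
# target only when λ1 * λ2 * λ3 > 0. Names and values are illustrative.
def is_gaze_selected(obj_xyz, focus_xyz, r, t_focused, t_d, window_x, window_y):
    x, y, z = obj_xyz
    xr, yr, zr = focus_xyz
    xmin, xmax = window_x
    ymin, ymax = window_y
    # λ1: object lies inside the spherical attention range around the focus point
    lam1 = 1 if (x - xr)**2 + (y - yr)**2 + (z - zr)**2 <= r**2 else 0
    # λ2: the focus point has dwelt on the object for at least the threshold time
    lam2 = 1 if t_focused >= t_d else 0
    # λ3: object lies inside the augmented reality display window
    lam3 = 1 if (xmin <= x <= xmax and ymin <= y <= ymax) else 0
    return lam1 * lam2 * lam3 > 0

print(is_gaze_selected(obj_xyz=(1.0, 1.0, 1.0), focus_xyz=(1.1, 0.9, 1.0),
                       r=0.3, t_focused=0.8, t_d=0.5,
                       window_x=(0.0, 2.0), window_y=(0.0, 2.0)))  # True
```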


Furthermore, when the number of objects in the display content that the eyeballs have focused on is more than one, the computing device determines the order in which the multiple objects are displayed through the following steps:


For the determined objects that the eyeballs have focused on, the probability of each object being displayed is determined by the following formula:








Pi = 1 − min(1, di/r);




Wherein Pi represents the probability of the ith object being displayed;


di represents the distance between the ith object and the gaze point; and


The object to be displayed first is determined according to the probability of each of the objects being displayed.
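A short sketch of this ordering rule, assuming an attention radius of 0.5 (an illustrative value): each focused object receives the display probability Pi = 1 − min(1, di/r), and the object with the highest probability is displayed first.

```python
# Sketch of the ordering rule: Pi = 1 - min(1, di / r); the object nearest
# the gaze point gets the highest probability and is displayed first.
# The attention radius value is an illustrative assumption.
def display_order(objects, r=0.5):
    """objects: list of (name, distance-to-gaze-point) pairs;
    returns (name, probability) pairs, first-displayed object first."""
    scored = [(name, 1.0 - min(1.0, d / r)) for name, d in objects]
    return sorted(scored, key=lambda item: item[1], reverse=True)

print(display_order([("cube", 0.40), ("sphere", 0.10), ("cone", 0.60)]))
# [('sphere', 0.8), ('cube', 0.2), ('cone', 0.0)] -> the sphere is shown first
```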


In some implementations of certain optional embodiments, in response to the display module displaying a plurality of augmented reality windows, and the object of the display content being in the range of the plurality of augmented reality windows or the object being selected by a plurality of the interactive operations, the computing device displays one by one the objects modified by each of the interactive operations, and displays them in each of the aforementioned augmented reality windows. Next, a description will be made in combination with FIG. 3. FIG. 3 is an application scenario diagram of an augmented reality system supporting customized multi-channel interaction used by more than one person according to the present disclosure. When the display module shows two augmented reality windows (31 and 32 in the figure), and the display content of each of the above augmented reality windows comprises the same object 6, then, when the object 6 is selected or modified by the users' interactive operations, the scenes in which the above interactive operations change the object 6 can be displayed in sequence according to set factors, as sketched below. The above-mentioned set factors may be the times at which the users' interactive operations occur, or may be a sequence set for different users in advance.
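The sequencing just described might be sketched as follows; the operation records, the timestamp field, and the preset user-priority list are assumptions for illustration.

```python
# Hedged sketch of the sequencing: modifications of the same object are
# replayed one by one, ordered by occurrence time or by a preset per-user
# sequence. The record fields and priority list are assumptions.
def replay_order(operations, user_priority=None):
    """operations: list of dicts with 'user', 'timestamp', and 'change' keys."""
    if user_priority is not None:
        key = lambda op: user_priority.index(op["user"])  # preset user sequence
    else:
        key = lambda op: op["timestamp"]                  # order of occurrence
    return sorted(operations, key=key)

ops = [{"user": "B", "timestamp": 2.0, "change": "rotate object 6"},
       {"user": "A", "timestamp": 1.2, "change": "move object 6"}]
print([op["change"] for op in replay_order(ops)])              # time order
print([op["change"] for op in replay_order(ops, ["B", "A"])])  # preset order
```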


In the augmented reality system disclosed in some embodiments of the present disclosure, the virtual window formed by back-projection of the augmented reality window can adjust the size of the augmented reality window by adjusting the distance between the virtual window and the user. In this way, the user can customize the size of the augmented reality window to improve the user's experience.


In addition, as the user's perspective changes, the position at which the augmented reality window is displayed on the display module may change. By constructing a three-axis coordinate system on the display module and, at the same time, using the coordinates of the virtual window and the user, the positions of the points on the augmented reality window's edge on the display module can be accurately determined, thereby determining the position of the augmented reality window on the display module. Therefore, the augmented reality window is displayed at a reasonable position on the display module according to the change of the user's perspective, which presents the content of the augmented reality more vividly and appropriately, thus improving the user's experience.


Besides, the object in the display content that the user's eyes have focused on is determined by determining the preset range of the user's focus point, the time of the object being focused on, and whether the object is within the range of the augmented reality display window. In this way, the object which the user has focused on can be accurately determined, and thereby displayed precisely, thus improving the accuracy of the system as well as the user's experience.


The above description is only a preferred embodiment of the present disclosure and an explanation of the applied technical principles. Those skilled in the art shall understand that the scope of the invention involved in the present disclosure is not limited to the technical solutions formed by the specific combination of the above technical features, but should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by mutually replacing the above features with the technical features disclosed in the present disclosure (but not limited to) with similar functions.

Claims
  • 1. An augmented reality system supporting customized multi-channel interaction, wherein the augmented reality system comprises a head-mounted sensor assembly, a computing device, and a display module; the head-mounted sensor assembly is used to capture a multi-channel interactive input information of a user and transmit the interactive input information to the computing device; the computing device is used to generate or modify a display content of an augmented reality according to the interactive input information; the display module is used to overlay display a background content with the display content of the augmented reality, wherein the display module is set to a far end relative to the head-mounted sensor assembly.
  • 2. The augmented reality system according to claim 1, wherein the system further comprises an augmented reality window displayed to the display module, the augmented reality window is used to display the display content of the augmented reality, and a position of the augmented reality window is determined through a posture of a head of the user relative to the display module, thereby simulating a real effect of the display content of the augmented reality; a shape and size of the augmented reality window are determined by setting a shape of a virtual window, wherein the virtual window is a near-eye window formed by back projection of the augmented reality window.
  • 3. The augmented reality system according to claim 2, wherein the position where the augmented reality window is displayed onto the display module is determined through the following steps: establishing a coordinate system according to a display range of the display module; determining, based on the following formula, point coordinates of a point on a boundary line where the virtual window is projected to the display module: PAi=λiPPi; wherein P represents coordinates of a center point of two eyes; Pi represents coordinates of an ith point on an edge of the virtual window; Ai represents point coordinates of a point where a straight line formed by starting from point P as an origin and passing through point Pi intersects the display module; PAi represents a distance from point P to Ai; PPi represents a distance from point P to Pi; λi represents a quotient of PAi and PPi, and λi is determined by Z-axis coordinates of three points P, Ai, and Pi.
  • 4. The augmented reality system according to claim 1, wherein the computing device determines by the following formula an object in the display content that an eyeball focuses on: λ1*λ2*λ3>0; wherein λ1 is determined by the following formula:
  • 5. The augmented reality system according to claim 4, wherein when a number of the objects in the display content that the eyeballs have focused on is multiple, the computing device determines through the following steps an order in which the multiple objects are displayed: for the determined objects that the eyeballs have focused on, a probability of each object being displayed is determined through the following formula:
  • 6. The augmented reality system according to claim 1, wherein in response to the display module displaying multiple augmented reality windows, and the objects of the display content being within a range of the multiple augmented reality windows or the objects being selected by multiple interactive operations, the computing device displays one by one the objects modified by each of the interactive operations.
  • 7. The augmented reality system according to claim 6, wherein the background content comprises acquired images or videos, and generated virtual dynamic or static scenes.
  • 8. The augmented reality system according to claim 6, wherein the augmented reality display content comprises images or videos, generated virtual dynamic or static scenes or objects, as well as visualization effects of interactive operations, the augmented reality display content can be modified in real time by interactive operations, and only a part of it within the range of the augmented reality window is displayed.
  • 9. The augmented reality system according to claim 1, wherein the head-mounted sensor assembly comprises at least one of the following: a camera, a microphone, a gesture sensor, and an eye movement tracking sensor, and the interactive operation characterized by the interactive input information comprises at least one of the following: a gesture interactive operation, a voice interactive operation and an eye movement interactive operation.
  • 10. The augmented reality system according to claim 1, wherein a display mode of the display module comprises a two-dimensional plane display and a three-dimensional space display, and when the display module performs the two-dimensional plane display, the display module comprises at least one of the following: a computer monitor, a tablet computer, and a screen projection; when the display module performs the three-dimensional space display, the display module comprises at least one of the following: a 3D projector and a 3D display.
  • 11. The augmented reality system according to claim 2, wherein in response to the display module displaying multiple augmented reality windows, and the objects of the display content being within a range of the multiple augmented reality windows or the objects being selected by multiple interactive operations, the computing device displays one by one the objects modified by each of the interactive operations.
  • 12. The augmented reality system according to claim 3, wherein in response to the display module displaying multiple augmented reality windows, and the objects of the display content being within a range of the multiple augmented reality windows or the objects being selected by multiple interactive operations, the computing device displays one by one the objects modified by each of the interactive operations.
  • 13. The augmented reality system according to claim 3, wherein in response to the display module displaying multiple augmented reality windows, and the objects of the display content being within a range of the multiple augmented reality windows or the objects being selected by multiple interactive operations, the computing device displays one by one the objects modified by each of the interactive operations.
  • 14. The augmented reality system according to claim 4, wherein in response to the display module displaying multiple augmented reality windows, and the objects of the display content being within a range of the multiple augmented reality windows or the objects being selected by multiple interactive operations, the computing device displays one by one the objects modified by each of the interactive operations.
  • 15. The augmented reality system according to claim 5, wherein in response to the display module displaying multiple augmented reality windows, and the objects of the display content being within a range of the multiple augmented reality windows or the objects being selected by multiple interactive operations, the computing device displays one by one the objects modified by each of the interactive operations.
Priority Claims (1)
Number Date Country Kind
202010476286.6 May 2020 CN national