The disclosure relates to augmented reality technology, and in particular to a confidentiality-protected method and system for remote assistance in an augmented reality environment.
A remote assistance session in an augmented reality environment can share an on-site scene, but confidential information may also be shared in the process, which is not optimal.
Implementations of the present technology will now be described, by way of embodiments, with reference to the attached figures, wherein:
It will be appreciated that for simplicity and clarity of illustration, where appropriate, reference numerals have been repeated among the different figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein can be practiced without these specific details. In other instances, methods, procedures, and components have not been described in detail so as not to obscure the relevant features being described. Also, the description is not to be considered as limiting the scope of the embodiments described herein. The drawings are not necessarily to scale and the proportions of certain parts may be exaggerated to better illustrate details and features of the present disclosure.
Several definitions that apply throughout this disclosure will now be presented.
The term “coupled” is defined as connected, whether directly or indirectly through intervening components, and is not necessarily limited to physical connections. The connection may be such that the objects are permanently connected or releasably connected. The term “substantially” is defined to be essentially conforming to the particular dimension, shape, or other feature that the term modifies, such that the component need not be exact. The term “comprising,” when utilized, means “including, but not necessarily limited to”; it specifically indicates open-ended inclusion or membership in the so-described combination, group, series, and the like. References to “an” or “one” embodiment in this disclosure are not necessarily to the same embodiment, and such references mean “at least one.”
As shown in
In step S202, part of the spatial model can be shaded by the mask model. The AR device 101 and the remote device 103 each log in with their respective IDs. The server 102 provides the mask model corresponding to the ID permissions and thereby masks part of the spatial model.
When multiple users participate in the same video call, one or more of the participating users may not have permission to access a specific area. The server 102 provides a spatial model that complies with all user rights, and provides a mask model to hide information from users who are not permitted to see information beyond their rights. For example, the spatial model can distinguish five areas: area A, area B, area C, area D, and area E. Users can be divided into three classes of user ID: general employee, manufacturer, and customer. General employees can access area A, area B, area C, and area D, but not area E. Manufacturers can access area A and area B; the mask model obscures area C, area D, and area E. Customers can access area A and area C; the mask model hides area B, area D, and area E from the customer.
The server 102 provides a mask model corresponding to the lowest authority or permission among the participants when multiple users join the same video call at the same time. For example, area B, area D, and area E are shaded by the mask model if the server 102 determines, according to their user IDs, that the participants of a first meeting are general employees and customers. Area B, area C, area D, and area E are shaded by the mask model when the server 102 determines, according to their user IDs, that the participants of a second meeting include general employees, manufacturers, and customers.
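The permission logic described above can be illustrated with a minimal sketch in Python. The role names, area identifiers, and function below are hypothetical and follow the example only; the disclosure does not specify how the server 102 implements this selection.

```python
# Hypothetical sketch: selecting which areas the mask model must shade
# so that no participant sees areas beyond their own permissions.
ALL_AREAS = {"A", "B", "C", "D", "E"}

# Accessible areas per user class, following the example above.
ACCESS_BY_ROLE = {
    "general_employee": {"A", "B", "C", "D"},
    "manufacturer": {"A", "B"},
    "customer": {"A", "C"},
}

def masked_areas(participant_roles):
    """Return the areas the mask model shades for the given participants.

    The mask corresponds to the lowest common authority: only areas
    accessible to every participant remain visible.
    """
    visible = ALL_AREAS
    for role in participant_roles:
        visible = visible & ACCESS_BY_ROLE[role]
    return ALL_AREAS - visible

# First meeting: general employees and customers -> shade B, D, E.
print(masked_areas(["general_employee", "customer"]))
# Second meeting: all three classes -> shade B, C, D, E.
print(masked_areas(["general_employee", "manufacturer", "customer"]))
```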
In step S301, the on-site scene is obtained through a camera unit.
In step S302, the mask model is obtained from the server.
In step S303, the viewed scene is formed by combining the on-site scene and the mask model.
In step S304, the viewed scene is shared with the remote device.
The AR device obtains the on-site scene through the camera unit. A location of the AR device and the mask model in the AR environment are obtained from the server. The mask model is a virtual object in the AR environment, and its parameters include a location of the mask model, a size of the mask model, and a shape of the mask model. The viewed scene, formed by combining the on-site scene and the mask model, is shown in a display unit of the AR device. The viewed scene is shared with the remote device. The confidential areas are covered and obscured by the mask model; thus, confidential information is not included in the data of the scene which is shared with the remote device.
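As a minimal sketch of steps S303 and S304, assuming for illustration that the mask model has already been projected into image coordinates as rectangular regions (the disclosure itself treats the mask model as a 3D virtual object), the composition could proceed as follows in Python:

```python
import numpy as np

def compose_viewed_scene(on_site_frame, mask_regions):
    """Combine the on-site frame with the projected mask model.

    on_site_frame: H x W x 3 image captured by the camera unit.
    mask_regions:  list of (x, y, width, height) rectangles assumed to be
                   the mask model projected into image coordinates.
    The masked areas are filled before the frame is shared, so confidential
    content never appears in the data sent to the remote device.
    """
    viewed = on_site_frame.copy()
    for x, y, w, h in mask_regions:
        viewed[y:y + h, x:x + w] = 0  # cover the confidential area
    return viewed

# Example: cover a 100 x 80 pixel region before sharing the frame.
frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
shared = compose_viewed_scene(frame, [(200, 150, 100, 80)])
```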
As shown in
As shown in
For the examples as shown in
The user interface units of the AR device and of the remote device may each include, but are not limited to, a keyboard, a mouse, a touch panel, a remote controller, a voice control unit, and a gesture recognition unit.
The camera unit 501 captures images of the on-site scene. The communication unit 502 communicates with the server. The AR device 500 obtains the mask model from the server through the communication unit 502. Spatial information of the first object is transmitted to the server when the first object is marked by the remote device. The spatial information of the first object is a distance, an orientation, or an angle between the AR device and the first object. The spatial information of the first object is obtained from the server through the communication unit and is displayed in the display unit 504. The AR device 500 shares the viewed scene with the remote device through the communication unit 502. The viewed scene includes not only the on-site scene, but also the mask model and the second object created by the AR device.
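For illustration only, the spatial information exchanged through the communication unit 502 could be represented as a small message such as the following Python sketch; the field names, example values, and serialization format are assumptions and are not part of the disclosure.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class FirstObjectSpatialInfo:
    """Spatial information of the first object relative to the AR device."""
    object_id: str
    distance_m: float       # distance between the AR device and the first object
    orientation_deg: float  # orientation of the first object relative to the device
    angle_deg: float        # viewing angle between the device and the first object

# The AR device serializes the spatial information and sends it to the server;
# the server relays it so it can be shown in the display unit 504.
info = FirstObjectSpatialInfo("valve-3", distance_m=1.8,
                              orientation_deg=42.0, angle_deg=12.5)
payload = json.dumps(asdict(info))
```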
The processing unit 503 combines the on-site scene and the mask model into the viewed scene, which is displayed in the display unit 504. The processing unit 503 performs gesture recognition on the scene captured by the camera unit 501. The second object is controlled according to the gestures of the user. For example, a state of the second object in the viewed scene can be controlled according to gestures, such as adding the object, moving it to a position, and deleting it.
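A minimal sketch of such gesture-driven control is given below in Python. The gesture names and the controller interface are hypothetical; the disclosure does not prescribe a particular gesture vocabulary or API.

```python
# Hypothetical sketch of controlling the second object from recognized gestures.
class SecondObjectController:
    def __init__(self):
        self.objects = {}  # object_id -> position in the AR environment

    def handle_gesture(self, gesture, object_id, position=None):
        if gesture == "add":
            self.objects[object_id] = position      # add the second object
        elif gesture == "move":
            if object_id in self.objects:
                self.objects[object_id] = position  # move it to a new position
        elif gesture == "delete":
            self.objects.pop(object_id, None)       # delete it from the scene

controller = SecondObjectController()
controller.handle_gesture("add", "arrow-1", position=(0.2, 0.5, 1.0))
controller.handle_gesture("move", "arrow-1", position=(0.3, 0.5, 1.1))
controller.handle_gesture("delete", "arrow-1")
```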
The inertial measurement unit 505 obtains parameters of any movement. The movement parameters include a moving direction, a moving distance, a moving height, and a moving angle. The processing unit 503 adjusts the spatial relationships of the mask model, the first object, and the second object according to the movement parameters. Three-dimensional viewing and the realism of the user experience of the virtual objects in the AR environment are thereby provided or improved.
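As one illustrative sketch, assuming the movement parameters are converted into a translation of the AR device (handling of the moving angle, i.e. rotation, is omitted for brevity), anchored virtual objects could be adjusted as follows; the coordinate conventions and function names are assumptions, not part of the disclosure.

```python
import math

def device_translation(direction_deg, distance, height):
    """Convert movement parameters into a translation of the AR device."""
    rad = math.radians(direction_deg)
    dx = distance * math.cos(rad)
    dz = distance * math.sin(rad)
    return (dx, height, dz)

def adjust_virtual_objects(objects, direction_deg, distance, height):
    """Shift every virtual object (mask model, first object, second object)
    by the inverse of the device translation, so each object appears fixed
    in the real environment and spatial relationships are preserved."""
    dx, dy, dz = device_translation(direction_deg, distance, height)
    return {name: (x - dx, y - dy, z - dz)
            for name, (x, y, z) in objects.items()}

objects = {"mask_model": (1.0, 0.0, 2.0), "first_object": (0.5, 0.2, 1.5)}
objects = adjust_virtual_objects(objects, direction_deg=90.0,
                                 distance=0.4, height=0.0)
```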
The embodiments shown and described above are only examples. Therefore, many details of such art are neither shown nor described. Even though numerous characteristics and advantages of the technology have been set forth in the foregoing description, together with details of the structure and function of the disclosure, the disclosure is illustrative only, and changes may be made in the detail, especially in matters of shape, size, and arrangement of the parts within the principles of the present disclosure, up to and including the full extent established by the broad general meaning of the terms used in the claims. It will, therefore, be appreciated that the embodiments described above may be modified within the scope of the claims.