The present disclosure generally relates to displaying virtual objects.
Some devices are capable of generating and presenting graphical environments that include virtual objects and/or representations of physical elements. These environments may be presented on mobile communication devices.
So that the present disclosure can be understood by those of ordinary skill in the art, a more detailed description may be had by reference to aspects of some illustrative implementations, some of which are shown in the accompanying drawings.
In accordance with common practice the various features illustrated in the drawings may not be drawn to scale. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. In addition, some of the drawings may not depict all of the components of a given system, method, or device. Finally, like reference numerals may be used to denote like features throughout the specification and figures.
Various implementations disclosed herein include devices, systems, and methods for determining a placement of virtual objects in a collection of virtual objects when changing from a first viewing arrangement to a second viewing arrangement based on their respective positions in one of the viewing arrangements. In some implementations, a method includes displaying a set of virtual objects in a first viewing arrangement in a first region of an extended reality (XR) environment that is bounded. The set of virtual objects is arranged in a first spatial arrangement. A user input corresponding to a request to change to a second viewing arrangement in a second region of the XR environment is obtained. A mapping is determined between the first spatial arrangement and a second spatial arrangement based on spatial relationships between the set of virtual objects. The set of virtual objects is displayed in the second viewing arrangement in the second region of the XR environment that is unbounded.
In accordance with some implementations, a device includes one or more processors, a non-transitory memory, and one or more programs. In some implementations, the one or more programs are stored in the non-transitory memory and are executed by the one or more processors. In some implementations, the one or more programs include instructions for performing or causing performance of any of the methods described herein. In accordance with some implementations, a non-transitory computer readable storage medium has stored therein instructions that, when executed by one or more processors of a device, cause the device to perform or cause performance of any of the methods described herein. In accordance with some implementations, a device includes one or more processors, a non-transitory memory, and means for performing or causing performance of any of the methods described herein.
Numerous details are described in order to provide a thorough understanding of the example implementations shown in the drawings. However, the drawings merely show some example aspects of the present disclosure and are therefore not to be considered limiting. Those of ordinary skill in the art will appreciate that other effective aspects and/or variants do not include all of the specific details described herein. Moreover, well-known systems, methods, components, devices, and circuits have not been described in exhaustive detail so as not to obscure more pertinent aspects of the example implementations described herein.
A person can interact with and/or sense a physical environment or physical world without the aid of an electronic device. A physical environment can include physical features, such as a physical object or surface. An example of a physical environment is a physical forest that includes physical plants and animals. A person can directly sense and/or interact with a physical environment through various means, such as hearing, sight, taste, touch, and smell. In contrast, a person can use an electronic device to interact with and/or sense an extended reality (XR) environment that is wholly or partially simulated. The XR environment can include mixed reality (MR) content, augmented reality (AR) content, virtual reality (VR) content, and/or the like. With an XR system, some of a person's physical motions, or representations thereof, can be tracked and, in response, characteristics of virtual objects simulated in the XR environment can be adjusted in a manner that complies with at least one law of physics. For instance, the XR system can detect the movement of a user's head and adjust graphical content and auditory content presented to the user similarly to how such views and sounds would change in a physical environment. In another example, the XR system can detect movement of an electronic device that presents the XR environment (e.g., a mobile phone, tablet, laptop, or the like) and adjust graphical content and auditory content presented to the user similarly to how such views and sounds would change in a physical environment. In some situations, the XR system can adjust characteristic(s) of graphical content in response to other inputs, such as a representation of a physical motion (e.g., a vocal command).
Many different types of electronic systems can enable a user to interact with and/or sense an XR environment. A non-exclusive list of examples includes heads-up displays (HUDs), head mountable systems, projection-based systems, windows or vehicle windshields having integrated display capability, displays formed as lenses to be placed on users' eyes (e.g., contact lenses), headphones/earphones, input systems with or without haptic feedback (e.g., wearable or handheld controllers), speaker arrays, smartphones, tablets, and desktop/laptop computers. A head mountable system can have one or more speaker(s) and an opaque display. Other head mountable systems can be configured to accept an opaque external display (e.g., a smartphone). The head mountable system can include one or more image sensors to capture images/video of the physical environment and/or one or more microphones to capture audio of the physical environment. A head mountable system may have a transparent or translucent display, rather than an opaque display. The transparent or translucent display can have a medium through which light is directed to a user's eyes. The display may utilize various display technologies, such as uLEDs, OLEDs, LEDs, liquid crystal on silicon, laser scanning light source, digital light projection, or combinations thereof. An optical waveguide, an optical reflector, a hologram medium, an optical combiner, combinations thereof, or other similar technologies can be used for the medium. In some implementations, the transparent or translucent display can be selectively controlled to become opaque. Projection-based systems can utilize retinal projection technology that projects images onto users' retinas. Projection systems can also project virtual objects into the physical environment (e.g., as a hologram or onto a physical surface).
The present disclosure provides methods, systems, and/or devices for determining a placement of virtual objects in a collection of virtual objects when changing from a first viewing arrangement to a second viewing arrangement based on their respective positions in one of the viewing arrangements.
In various implementations, an electronic device, such as a smartphone, a tablet, or a laptop or desktop computer, displays virtual objects in an extended reality (XR) environment. The virtual objects may be organized in collections. Collections can be viewed in various viewing arrangements. One such viewing arrangement presents the virtual objects on two-dimensional virtual surfaces. Another viewing arrangement presents the virtual objects on a region of the XR environment that may be associated with a physical element. Requiring a user to arrange the virtual objects in each viewing arrangement may increase the amount of effort the user expends to organize and view the virtual objects. Interpreting and acting upon user inputs that correspond to the user manually arranging the virtual objects results in power consumption and/or heat generation, thereby adversely impacting operability of the device.
In various implementations, when a user changes a collection of virtual objects from a first viewing arrangement to a second viewing arrangement, the electronic device arranges the virtual objects in the second viewing arrangement based on their arrangement in the first viewing arrangement. For example, virtual objects that are clustered in the first viewing arrangement may be clustered in the second viewing arrangement. Automatically arranging the virtual objects in the second viewing arrangement reduces a need for user inputs corresponding to the user manually arranging the virtual objects in the second viewing arrangement. Reducing unnecessary user inputs reduces utilization of computing resources associated with interpreting and acting upon unnecessary user inputs, thereby enhancing operability of the device by reducing power consumption and/or heat generation by the device.
In some implementations, the electronic device 102 includes a handheld computing device that can be held by the user 104. For example, in some implementations, the electronic device 102 includes a smartphone, a tablet, a media player, a laptop, or the like. In some implementations, the electronic device 102 includes a desktop computer. In some implementations, the electronic device 102 includes a wearable computing device that can be worn by the user 104. For example, in some implementations, the electronic device 102 includes a head-mountable device (HMD), an electronic watch or a pair of headphones. In some implementations, the electronic device 102 is a dedicated virtual assistant device that includes a speaker for playing audio and a microphone for receiving verbal commands. In some implementations, the electronic device 102 includes a television or a set-top box that outputs video data to a television.
In various implementations, the electronic device 102 includes (e.g., implements) a user interface engine that displays a user interface on a display 106. In some implementations, the display 106 is integrated in the electronic device 102. In some implementations, the display 106 is implemented as a separate device from the electronic device 102. For example, the display 106 may be implemented as an HMD that is in communication with the electronic device 102.
In some implementations, the user interface engine displays the user interface in an extended reality (XR) environment 108 on the display 106. The user interface may include one or more virtual objects 110a, 110b, 110c, 110d, 110e, 110f (collectively referred to as virtual objects 110) that are displayed in a first viewing arrangement in a region 112 of the XR environment 108. In some implementations, the first viewing arrangement is a bounded viewing arrangement. For example, the region 112 may include a two-dimensional virtual surface 114a enclosed by a boundary and a two-dimensional virtual surface 114b that is substantially parallel to the two-dimensional virtual surface 114a. The virtual objects 110 may be displayed on either of the two-dimensional virtual surfaces 114a, 114b. In some implementations, the virtual objects 110 may be displayed between the two-dimensional virtual surfaces 114a, 114b.
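For illustration only, the following is a minimal sketch of how a bounded region such as the region 112 and placements on or between its two parallel virtual surfaces might be modeled; the class and field names (VirtualObject, BoundedRegion, surface_separation, and so on) are hypothetical and are not part of the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    # Illustrative stand-in for a virtual object such as 110a-110f.
    identifier: str
    # Position within the region: (x, y) on a surface, plus a depth value
    # between 0.0 (front surface, e.g., 114a) and 1.0 (back surface, e.g., 114b).
    x: float
    y: float
    depth: float = 0.0

@dataclass
class BoundedRegion:
    # Two substantially parallel two-dimensional virtual surfaces,
    # each enclosed by a boundary of the given width and height.
    width: float
    height: float
    surface_separation: float  # assumed distance between the two surfaces
    objects: list = field(default_factory=list)

    def place(self, obj: VirtualObject) -> None:
        # Clamp the placement so the object stays within the boundary.
        obj.x = min(max(obj.x, 0.0), self.width)
        obj.y = min(max(obj.y, 0.0), self.height)
        obj.depth = min(max(obj.depth, 0.0), 1.0)
        self.objects.append(obj)

region_112 = BoundedRegion(width=1.2, height=0.8, surface_separation=0.1)
region_112.place(VirtualObject("110a", x=0.2, y=0.6))             # on the front surface
region_112.place(VirtualObject("110d", x=0.9, y=0.3, depth=1.0))  # on the back surface
```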
As shown in
In some implementations, the electronic device 102 obtains a user input corresponding to a change to a second viewing arrangement in a region 116 of the XR environment 108. The second viewing arrangement may be an unbounded viewing arrangement. For example, the region 116 may be associated with a physical element in the XR environment 108. In some implementations, the user input is a gesture input. For example, the electronic device 102 may detect a gesture directed to one or more of the virtual objects or to the region 112 and/or the region 116. In some implementations, the user input is an audio input. For example, the electronic device 102 may detect a voice command to change to the second viewing arrangement. In some implementations, the electronic device 102 may receive the user input from a user input device, such as a keyboard, mouse, stylus, and/or touch-sensitive display. In some implementations, the electronic device 102 obtains a confirmation input to confirm that the user 104 wishes to change to the second viewing arrangement. For example, the electronic device 102 may sense a head pose of the user 104 or a gesture performed by the user 104.
In some implementations, the electronic device 102 determines a mapping between the first spatial arrangement and a second spatial arrangement. The mapping may be based on spatial relationships between the virtual objects 110. For example, virtual objects that share a first spatial characteristic, such as the virtual objects 110a, 110b, and 110c, may be grouped together and separately from virtual objects that share a second spatial characteristic, such as the virtual objects 110d, 110e, and 110f.
Referring to
Referring to
In some implementations, the regions 112a, 112b are associated with different characteristics of the virtual objects 110. For example, the virtual objects 110g, 110h, 110i may be displayed in the region 112a because they are associated with a first application. As another example, the virtual objects 110g, 110h, 110i may represent content of a first media type. The virtual objects 110j, 110k, 110l may be displayed in the region 112b because they are associated with a second application and/or because they represent content of a second media type.
Referring to
In some implementations, a visual characteristic of one or more of the virtual objects 110 may be modified based on the viewing arrangement. For example, when a virtual object 110 is displayed in the first viewing arrangement, it may have a two-dimensional appearance. When the same virtual object 110 is displayed in the second viewing arrangement, it may have a three-dimensional appearance.
The user 104 may manipulate the virtual objects 110 in the second viewing arrangement. For example, the user 104 may use gestures and/or other inputs to move one or more of the virtual objects 110 in the second viewing arrangement. The user 104 may use a user input, such as a gesture input, an audio input, or a user input provided via a user input device, such as a keyboard, mouse, stylus, and/or touch-sensitive display, to return to the first viewing arrangement. In some implementations, when the virtual objects 110 are displayed in the first viewing arrangement, any virtual objects 110 that were moved in the second viewing arrangement are displayed in different positions (e.g., relative to their original positions) in the first viewing arrangement. In some implementations, when the virtual objects 110 are displayed in the first viewing arrangement, any virtual objects 110 that were not moved in the second viewing arrangement are displayed in their original positions (e.g., before changing to the second viewing arrangement) in the first viewing arrangement.
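As a hedged sketch of the return-to-first-arrangement behavior described above, the following assumes the device records each virtual object's original position before the change and tracks which objects the user moved; the function and parameter names are illustrative assumptions rather than the disclosed implementation.

```python
def restore_first_arrangement(original_positions, current_positions, moved_ids):
    """Return positions to use when switching back to the first arrangement.

    original_positions: {object_id: (x, y)} captured before the change.
    current_positions:  {object_id: (x, y)} from the second arrangement.
    moved_ids:          set of object ids the user repositioned while in
                        the second viewing arrangement.
    """
    restored = {}
    for object_id, original in original_positions.items():
        if object_id in moved_ids:
            # Moved objects are shown at positions derived from where the user
            # left them; a fuller implementation would map these coordinates
            # back through the arrangement mapping rather than copying them.
            restored[object_id] = current_positions[object_id]
        else:
            # Unmoved objects return to their original positions.
            restored[object_id] = original
    return restored
```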
In some implementations, the user interface engine 200 includes a display 202. The display 202 displays a set of virtual objects in a first viewing arrangement in a first region of an extended reality (XR) environment, such as the XR environment 108 of
In the first viewing arrangement, the virtual objects are arranged in a first spatial arrangement. For example, the virtual objects may be displayed on any of the two-dimensional virtual surfaces. In some implementations, the virtual objects may be displayed between the two-dimensional virtual surfaces. Placement of the virtual objects may be determined by a user. In some implementations, placement of the virtual objects is determined programmatically, e.g., based on functional characteristics of the virtual objects. For example, placement of the virtual objects may be based on respective applications with which the virtual objects are associated. In some implementations, placement of the virtual objects is based on media types or file types of content with which the virtual objects are associated.
In some implementations, the virtual objects are displayed in groupings. For example, some virtual objects may share a first spatial characteristic of being within a threshold radius of a point. In some implementations, some virtual objects share a first spatial characteristic of being associated with a particular two-dimensional virtual surface or a particular space between two-dimensional virtual surfaces.
In some implementations, the user interface engine 200 obtains a user input 212 corresponding to a change to a second viewing arrangement in a second region of the XR environment. For example, the user interface engine 200 may receive the user input 212 from a user input device, such as a keyboard, mouse, stylus, and/or touch-sensitive display. In some implementations, the user input 212 includes an audio input received from an audio sensor, such as a microphone. For example, the user may provide a spoken command to change to the second viewing arrangement.
In some implementations, the user input 212 includes an image 214 received from the image sensor 204. The image 214 may be a still image or a video feed comprising a series of image frames. The image 214 may include a set of pixels representing an extremity of the user. The virtual object arranger 210 may perform image analysis on the image 214 to detect a gesture. For example, the virtual object arranger 210 may detect a gesture directed to one or more of the virtual objects or to a region in the XR environment.
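One way such a gesture-to-target test might work, assuming an upstream hand-tracking step has already produced a pointing ray from the detected extremity, is sketched below; the ray inputs, the hit_radius value, and the function name are assumptions made for illustration.

```python
import math

def find_gesture_target(ray_origin, ray_direction, object_centers, hit_radius=0.05):
    """Return the id of the virtual object closest to a gesture ray, if any.

    ray_origin / ray_direction: 3D vectors describing a pointing ray inferred
    from the detected extremity (assumed to come from a separate hand-tracking
    step that is outside the scope of this sketch).
    object_centers: {object_id: (x, y, z)} positions of displayed objects.
    """
    best_id, best_distance = None, float("inf")
    # Normalize the direction so the projection math below is well defined.
    norm = math.sqrt(sum(c * c for c in ray_direction))
    d = tuple(c / norm for c in ray_direction)
    for object_id, center in object_centers.items():
        to_center = tuple(c - o for c, o in zip(center, ray_origin))
        t = sum(a * b for a, b in zip(to_center, d))
        if t < 0:  # object lies behind the ray origin
            continue
        closest = tuple(o + t * c for o, c in zip(ray_origin, d))
        distance = math.dist(closest, center)
        if distance < hit_radius and distance < best_distance:
            best_id, best_distance = object_id, distance
    return best_id
```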
In some implementations, the user input 212 includes a gaze vector received from a user-facing camera. For example, the virtual object arranger 210 may determine that a gaze of the user is directed to one or more of the virtual objects or to a region in the XR environment.
In some implementations, the virtual object arranger 210 obtains a confirmation input to confirm that the user wishes to change to the second viewing arrangement. For example, the virtual object arranger 210 may sense a head pose of the user or a gesture performed by the user. In some implementations, the confirmation input comprises a gaze vector that is maintained for at least a threshold duration.
In some implementations, the second viewing arrangement is an unbounded viewing arrangement. For example, in the second viewing arrangement, the virtual objects may be displayed in a region that is associated with a physical element in the XR environment. In the second viewing arrangement, the virtual objects are displayed in a second spatial arrangement. For example, some of the virtual objects may be displayed in clusters in the second spatial arrangement. The virtual object arranger 210 determines a mapping between the first spatial arrangement and the second spatial arrangement based on spatial relationships between the virtual objects. For example, virtual objects that share a first spatial characteristic may be grouped together and separately from virtual objects that share a second spatial characteristic. In some implementations, for example, virtual objects that are associated with a particular two-dimensional virtual surface in the first viewing arrangement may be displayed in a cluster in the second viewing arrangement.
In some implementations, the virtual object arranger 210 displays the set of virtual objects in the second viewing arrangement in the second region of the XR environment on the display 202. Spatial relationships between virtual objects may be preserved. For example, some virtual objects may be displayed in a cluster because they share a spatial characteristic when displayed in the first spatial arrangement in the first region of the XR environment. Within each cluster, the spatial relationships between the virtual objects may be preserved or changed.
In some implementations, the virtual object arranger 300 implements the virtual object arranger 210 shown in
While pertinent features are illustrated, those of ordinary skill in the art will appreciate from the present disclosure that various other features have not been illustrated for the sake of brevity and so as not to obscure more pertinent aspects of the implementations disclosed herein. Those of ordinary skill in the art will also appreciate from the present disclosure that the functions and sub-functions implemented by the virtual object arranger 300 can be combined into one or more systems and/or further sub-divided into additional subsystems; and that the functionality described below is provided as merely one example configuration of the various aspects and functions described herein.
In some implementations, an object renderer 310 displays a set of virtual objects in a first viewing arrangement on the display 302 in a first region of an XR environment. The first viewing arrangement may be a bounded viewing arrangement and may include one or more sets of substantially parallel two-dimensional virtual surfaces that are enclosed by respective boundaries, such as the region 112 of
In some implementations, the object renderer 310 displays the virtual objects in groupings sharing spatial characteristics. For example, some virtual objects may share a spatial characteristic of being within a threshold radius of a point. In some implementations, some virtual objects share a spatial characteristic of being associated with a particular two-dimensional virtual surface or a particular space between two-dimensional virtual surfaces.
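A minimal sketch of testing this "within a threshold radius of a point" spatial characteristic is shown below; the anchor point, radius value, and identifiers are illustrative assumptions.

```python
import math

def group_within_radius(object_positions, anchor_point, threshold_radius):
    """Split objects into those sharing the spatial characteristic of being
    within threshold_radius of anchor_point, and all remaining objects.

    object_positions: {object_id: (x, y)} positions in the first arrangement.
    """
    inside, outside = [], []
    for object_id, position in object_positions.items():
        if math.dist(position, anchor_point) <= threshold_radius:
            inside.append(object_id)
        else:
            outside.append(object_id)
    return inside, outside

# Example: objects near the point (0.3, 0.4) form one grouping.
positions = {"110a": (0.25, 0.45), "110b": (0.35, 0.40), "110d": (0.90, 0.10)}
near, far = group_within_radius(positions, anchor_point=(0.3, 0.4), threshold_radius=0.1)
```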
In some implementations, an input obtainer 320 obtains a user input 322 that corresponds to a change to a second viewing arrangement in a second region of the XR environment. For example, the input obtainer 320 may receive the user input 322 from a user input device, such as a keyboard, mouse, stylus, and/or touch-sensitive display. In some implementations, the user input 322 includes an audio input received from an audio sensor, such as a microphone. For example, the user may provide a spoken command to change to the second viewing arrangement.
In some implementations, the user input 322 includes an image 324 received from the image sensor 304. The image 324 may be a still image or a video feed comprising a series of image frames. The image 324 may include a set of pixels representing an extremity of the user. The input obtainer 320 may perform image analysis on the image 324 to detect a gesture. For example, the input obtainer 320 may detect a gesture directed to one or more of the virtual objects or to a region in the XR environment.
In some implementations, the user input 322 includes a gaze vector received from a user-facing image sensor. For example, the input obtainer 320 may determine that a gaze of the user is directed to one or more of the virtual objects or to a region in the XR environment.
In some implementations, the input obtainer 320 obtains a confirmation input to confirm that the user wishes to change to the second viewing arrangement. For example, the input obtainer 320 may use an accelerometer, gyroscope, and/or inertial measurement unit (IMU) to sense a head pose of the user. The input obtainer 320 may use the image sensor 304 to detect a gesture performed by the user. In some implementations, the confirmation input comprises a gaze vector that is maintained for at least a threshold duration.
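The dwell-based confirmation could be realized, for example, by restarting a timer whenever the gaze target changes and confirming once the same target has been gazed at for the threshold duration; the sketch below makes that assumption and uses hypothetical names.

```python
import time

class GazeDwellConfirmer:
    """Treats a gaze held on the same target for at least dwell_seconds as a
    confirmation input (an illustrative sketch, not the disclosed method)."""

    def __init__(self, dwell_seconds=1.0):
        self.dwell_seconds = dwell_seconds
        self._target = None
        self._since = None

    def update(self, gazed_target, now=None):
        """Call on each gaze sample; returns True once the dwell threshold
        has been met for a stable, non-empty target."""
        now = time.monotonic() if now is None else now
        if gazed_target != self._target:
            # Gaze moved to a different target (or away); restart the timer.
            self._target, self._since = gazed_target, now
            return False
        return (
            gazed_target is not None
            and now - self._since >= self.dwell_seconds
        )
```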
In some implementations, the second viewing arrangement is an unbounded viewing arrangement in which the virtual objects are displayed in a region that may not be defined by a boundary. For example, in the second viewing arrangement, the virtual objects may be displayed in a region that is associated with a physical element in the XR environment. In the second viewing arrangement, the virtual objects are displayed in a second spatial arrangement. For example, some virtual objects may be displayed in clusters.
In some implementations, an object transposer 330 determines a mapping between the first spatial arrangement and the second spatial arrangement based on spatial relationships between the virtual objects. For example, virtual objects that share a first spatial characteristic may be grouped together. The virtual objects sharing the first spatial characteristic may be grouped separately from virtual objects that share a second spatial characteristic. For example, virtual objects that are associated with a first two-dimensional virtual surface in the first viewing arrangement may be displayed in a first cluster in the second viewing arrangement. Virtual objects that are associated with a second two-dimensional virtual surface in the first viewing arrangement may be displayed in a second cluster in the second viewing arrangement. The object transposer 330 may determine the distance between the first and second clusters based on, for example, the spatial relationship between the first and second two-dimensional virtual surfaces in the first viewing arrangement.
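The following is a simplified, one-dimensional sketch of such a mapping: objects are clustered by the virtual surface they occupied in the first viewing arrangement, and the spacing between the resulting clusters follows the spacing between the source surfaces. The names and the scalar surface offsets are assumptions made for illustration.

```python
from collections import defaultdict

def map_to_clusters(surface_assignments, surface_offsets, cluster_spacing_scale=1.0):
    """Derive second-arrangement cluster placements from the first arrangement.

    surface_assignments: {object_id: surface_id} - which two-dimensional
        virtual surface each object belongs to in the first arrangement.
    surface_offsets:     {surface_id: x_offset} - a one-dimensional stand-in
        for where each surface sits in the first arrangement.
    Returns {object_id: (cluster_id, cluster_center_x)} where the distance
    between cluster centers follows the distance between the source surfaces,
    scaled by cluster_spacing_scale.
    """
    clusters = defaultdict(list)
    for object_id, surface_id in surface_assignments.items():
        clusters[surface_id].append(object_id)

    placements = {}
    for surface_id, members in clusters.items():
        center_x = surface_offsets[surface_id] * cluster_spacing_scale
        for object_id in members:
            placements[object_id] = (surface_id, center_x)
    return placements

assignments = {"110a": "114a", "110b": "114a", "110d": "114b"}
offsets = {"114a": 0.0, "114b": 0.5}
print(map_to_clusters(assignments, offsets, cluster_spacing_scale=2.0))
```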
In some implementations, the object renderer 310 displays the set of virtual objects in the second viewing arrangement in the second region of the XR environment on the display 302. Spatial relationships between virtual objects may be preserved. For example, some virtual objects may be displayed in a cluster because they share a spatial characteristic when displayed in the first spatial arrangement in the first region of the XR environment. Within each cluster, the spatial relationships between the virtual objects may be preserved or changed. For example, the object transposer 330 may preserve the spatial relationships between the virtual objects to the extent possible while still displaying the virtual objects in the second region. In some implementations, the object transposer 330 arranges virtual objects to satisfy aesthetic criteria. For example, the object transposer 330 may arrange the virtual objects by shape and/or size. As another example, if the second region is associated with a physical element, the object transposer 330 may arrange the virtual objects based on the shape of the physical element.
In some implementations, the object renderer 310 resizes virtual objects to accommodate display constraints. For example, if the second region is associated with a physical element, the object renderer 310 may resize virtual objects to fit the physical element. In some implementations, the object renderer 310 resizes virtual objects to satisfy aesthetic criteria. For example, certain virtual objects may be resized to maintain proportionality with other virtual objects or with other features of the XR environment.
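A hedged sketch of such resizing, assuming the arrangement's overall footprint and the physical surface's extent are both known, is to apply one uniform scale factor so the objects fit the surface while keeping their proportions; the names and example dimensions are illustrative.

```python
def resize_to_fit(object_sizes, footprint, surface_extent):
    """Uniformly scale object sizes so the arrangement fits a physical surface.

    object_sizes:   {object_id: (width, depth)} desired sizes of the objects.
    footprint:      (width, depth) of the arrangement at its desired sizes.
    surface_extent: (width, depth) of the physical surface (e.g., a tabletop).

    A single scale factor keeps the objects proportional to one another.
    """
    scale = min(
        1.0,
        surface_extent[0] / footprint[0],
        surface_extent[1] / footprint[1],
    )
    resized = {
        object_id: (width * scale, depth * scale)
        for object_id, (width, depth) in object_sizes.items()
    }
    return resized, scale

sizes = {"110a": (0.30, 0.20), "110b": (0.25, 0.25)}
resized, factor = resize_to_fit(sizes, footprint=(1.5, 0.9), surface_extent=(1.0, 0.6))
```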
Referring to
In some implementations, as represented by block 410d, the method 400 includes displaying the set of virtual objects on at least one of the first two-dimensional virtual surface or the second two-dimensional virtual surface. The virtual objects may be displayed between the two-dimensional virtual surfaces. In some implementations, a user assigns respective placement locations for the virtual objects on or between the two-dimensional virtual surfaces, for example, using gesture inputs.
In some implementations, respective placement locations for the virtual objects are assigned programmatically. For example, in some implementations, as represented by block 410e, the set of virtual objects corresponds to content items that have a first characteristic. In some implementations, as represented by block 410f, the set of virtual objects includes a first subset of virtual objects that correspond to content items that have a first characteristic and a second subset of virtual objects that correspond to content items that have a second characteristic that is different from the first characteristic. As represented by block 410g, the first subset of virtual objects may be displayed in a first area of the first region, and the second subset of virtual objects may be displayed in a second area of the first region. For example, as illustrated in
In various implementations, as represented by block 420, the method 400 includes obtaining a user input that corresponds to a request to change to a second viewing arrangement in a second region of the XR environment. As represented by block 420a, the user input may include a gesture input. For example, the user input may include an image that is received from an image sensor. The image may be a still image or a video feed comprising a plurality of video frames. The image includes pixels that may represent various objects, including, for example, an extremity of the user. For example, the electronic device 102 shown in
In some implementations, as represented by block 420b, the user input includes an audio input. The audio input may be received from an audio sensor, such as a microphone. For example, the user may provide a spoken command to change to the second viewing arrangement.
In some implementations, as represented by block 420c, the method 400 includes receiving the user input from a user input device. For example, the user input may be received from a keyboard, mouse, stylus, and/or touch-sensitive display. As another example, a user-facing image sensor may provide data that may be used to determine a gaze vector. For example, the electronic device 102 shown in
In some implementations, as represented by block 420d, the method 400 includes obtaining a confirmation input before determining the mapping between the first spatial arrangement and a second spatial arrangement. For example, the electronic device 102 shown in
In some implementations, as represented by block 420e, the second viewing arrangement comprises an unbounded viewing arrangement. For example, the virtual objects may be displayed in a second region of the XR environment that may not be defined by a boundary. In some implementations, as represented by block 420f, the second region of the XR environment is associated with a physical element in the XR environment. For example, the second region may be associated with a physical table that is present in the XR environment. In some implementations, as represented by block 420g, the second region of the XR environment is associated with a surface of the physical element in the XR environment. For example, the second region may be associated with a tabletop of a physical table that is present in the XR environment.
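For example, associating the second region with a tabletop could involve projecting candidate placement points onto the detected surface; the sketch below assumes the surface is described by an origin point and a unit normal, which are illustrative inputs rather than part of the disclosure.

```python
def project_onto_plane(point, plane_origin, plane_normal):
    """Project a 3D point onto a physical surface (e.g., a detected tabletop)
    so that a virtual object can be placed directly on that surface.

    plane_normal is assumed to be unit length.
    """
    offset = tuple(p - o for p, o in zip(point, plane_origin))
    distance = sum(a * b for a, b in zip(offset, plane_normal))
    return tuple(p - distance * n for p, n in zip(point, plane_normal))

# Example: drop a floating object onto a horizontal tabletop at height 0.7 m.
tabletop_origin, tabletop_normal = (0.0, 0.7, -1.0), (0.0, 1.0, 0.0)
placed = project_onto_plane((0.3, 1.2, -1.1), tabletop_origin, tabletop_normal)
# placed == (0.3, 0.7, -1.1)
```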
In various implementations, as represented by block 430, the method 400 includes determining a mapping between the first spatial arrangement and a second spatial arrangement based on spatial relationships between the set of virtual objects. For example, virtual objects that share a first spatial characteristic may be grouped together. The virtual objects sharing the first spatial characteristic may be grouped separately from virtual objects that share a second spatial characteristic that is different from the first spatial characteristic. Automatically arranging the virtual objects in the second viewing arrangement reduces a need for user inputs corresponding to the user manually arranging the virtual objects in the second viewing arrangement. Reducing unnecessary user inputs reduces utilization of computing resources associated with interpreting and acting upon unnecessary user inputs, thereby enhancing operability of the device by reducing power consumption and/or heat generation by the device.
Referring to
As represented by block 430b, in some implementations, the method 400 includes determining a subset of virtual objects that have a first characteristic and displaying the subset of virtual objects as a cluster of virtual objects in the second spatial arrangement. For example, as represented by block 430c, the first characteristic may be a first media type. As another example, as represented by block 430d, the first characteristic may be an association with a first application. Virtual objects that represent content of the same media type or content that is associated with the same application may be clustered together in the second spatial arrangement.
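Clustering by such a non-spatial characteristic can be sketched as a simple group-by over per-object metadata; the metadata keys, application names, and identifiers below are assumptions for illustration.

```python
from collections import defaultdict

def cluster_by_characteristic(objects, characteristic):
    """Group object ids by a shared characteristic (e.g., media type or
    associated application) so each group can be shown as one cluster.

    objects: {object_id: metadata dict}; characteristic: metadata key to use.
    """
    clusters = defaultdict(list)
    for object_id, metadata in objects.items():
        clusters[metadata[characteristic]].append(object_id)
    return dict(clusters)

catalog = {
    "110g": {"media_type": "photo", "application": "Gallery"},
    "110h": {"media_type": "photo", "application": "Gallery"},
    "110j": {"media_type": "audio", "application": "Music"},
}
print(cluster_by_characteristic(catalog, "media_type"))
# {'photo': ['110g', '110h'], 'audio': ['110j']}
```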
As represented by block 430e, the first characteristic may be a spatial relationship in the first spatial arrangement. For example, virtual objects that are associated with a first two-dimensional virtual surface in the first viewing arrangement may be displayed in a first cluster in the second viewing arrangement. Such virtual objects may be grouped separately from virtual objects that share a second characteristic. For example, virtual objects that are associated with a second two-dimensional virtual surface in the first viewing arrangement may be displayed in a second cluster in the second viewing arrangement. The distance between the first and second clusters may be determined based on, for example, the spatial relationship between the first and second two-dimensional virtual surfaces in the first viewing arrangement.
In some implementations, as represented by block 430f, the spatial relationship is a distance from a point on the first region that satisfies a threshold. For example, some virtual objects may be within a threshold radius of a point (e.g., point P1 of
In some implementations, as represented by block 430g, the first characteristic is an association with a physical element. For example, virtual objects that are associated with a physical table that is present in the XR environment may be displayed as a cluster.
In various implementations, as represented by block 440, the method 400 includes displaying the set of virtual objects in the second viewing arrangement in the second region of the XR environment that is unbounded (e.g., not surrounded by and/or not enclosed within a visible boundary). Spatial relationships between the virtual objects may be preserved or changed. For example, the electronic device 102 shown in
In some implementations, the electronic device 102 resizes virtual objects to accommodate display constraints. For example, if the second region is associated with a physical element, the object renderer 310 may resize virtual objects to fit the physical element. In some implementations, the object renderer 310 resizes virtual objects to satisfy aesthetic criteria. For example, certain virtual objects may be resized to maintain proportionality with other virtual objects or with other features of the XR environment.
Virtual objects can be manipulated (e.g., moved) in the XR environment. In some implementations, as represented by block 440a, the method 400 includes obtaining an untethered user input that corresponds to a user selection of a particular virtual object. For example, the electronic device 102 shown in
In some implementations, as represented by block 440c, the method 400 includes obtaining a manipulation user input. The manipulation user input corresponds to a manipulation, e.g., a movement, of the virtual object that the user intends to be displayed. In some implementations, as represented by block 440d, the manipulation user input includes a gesture input. As represented by block 440e, in some implementations, the method 400 includes displaying a manipulation of the particular virtual object in the XR environment based on the manipulation user input. For example, the user may perform a drag and drop gesture in connection with a selected virtual object. The electronic device 102 shown in
In some implementations, the communication interface 508 is provided to, among other uses, establish and maintain a metadata tunnel between a cloud-hosted network management system and at least one private network including one or more compliant devices. In some implementations, the one or more communication buses 504 include circuitry that interconnects and controls communications between system components. The memory 520 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices, and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. The memory 520 optionally includes one or more storage devices remotely located from the one or more CPUs 502. The memory 520 comprises a non-transitory computer readable storage medium.
In some implementations, the memory 520 or the non-transitory computer readable storage medium of the memory 520 stores the following programs, modules and data structures, or a subset thereof including an optional operating system 530, the object renderer 310, the input obtainer 320, and the object transposer 330. As described herein, the object renderer 310 may include instructions 310a and/or heuristics and metadata 310b for displaying a set of virtual objects in a viewing arrangement on a display in an XR environment. As described herein, the input obtainer 320 may include instructions 320a and/or heuristics and metadata 320b for obtaining a user input that corresponds to a change to a second viewing arrangement. As described herein, the object transposer 330 may include instructions 330a and/or heuristics and metadata 330b for determining a mapping between a first spatial arrangement and a second spatial arrangement based on spatial relationships between the virtual objects.
It will be appreciated that
While various aspects of implementations within the scope of the appended claims are described above, it should be apparent that the various features of implementations described above may be embodied in a wide variety of forms and that any specific structure and/or function described above is merely illustrative. Based on the present disclosure one skilled in the art should appreciate that an aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method may be practiced using any number of the aspects set forth herein. In addition, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to or other than one or more of the aspects set forth herein.
It will also be understood that, although the terms “first,” “second,” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another.
The terminology used herein is for the purpose of describing particular implementations only and is not intended to be limiting of the claims. As used in the description of the implementations and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in accordance with a determination” or “in response to detecting,” that a stated condition precedent is true, depending on the context. Similarly, the phrase “if it is determined [that a stated condition precedent is true]” or “if [a stated condition precedent is true]” or “when [a stated condition precedent is true]” may be construed to mean “upon determining” or “in response to determining” or “in accordance with a determination” or “upon detecting” or “in response to detecting” that the stated condition precedent is true, depending on the context.
This application is a continuation of Intl. Patent App. No. PCT/US2021/47985, filed on Aug. 27, 2021, which claims priority to U.S. Provisional Patent App. No. 63/081,987, filed on Sep. 23, 2020, both of which are hereby incorporated by reference in their entirety.