The present disclosure generally relates to selecting virtual objects.
Some devices are capable of generating and presenting graphical environments that include virtual objects and/or representations of physical elements. These environments may be presented on mobile communication devices.
So that the present disclosure can be understood by those of ordinary skill in the art, a more detailed description may be had by reference to aspects of some illustrative implementations, some of which are shown in the accompanying drawings.
In accordance with common practice the various features illustrated in the drawings may not be drawn to scale. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. In addition, some of the drawings may not depict all of the components of a given system, method, or device. Finally, like reference numerals may be used to denote like features throughout the specification and figures.
Various implementations disclosed herein include devices, systems, and methods for selecting multiple virtual objects within an extended reality (XR) environment. In some implementations, a method includes receiving a first gesture associated with a first virtual object in an extended reality (XR) environment. A movement of the first virtual object in the XR environment within a threshold distance of a second virtual object in the XR environment is detected. In response to detecting the movement of the first virtual object in the XR environment within the threshold distance of the second virtual object in the XR environment, a concurrent movement of the first virtual object and the second virtual object is displayed in the XR environment based on the first gesture.
In accordance with some implementations, a device includes one or more processors, a non-transitory memory, and one or more programs. In some implementations, the one or more programs are stored in the non-transitory memory and are executed by the one or more processors. In some implementations, the one or more programs include instructions for performing or causing performance of any of the methods described herein. In accordance with some implementations, a non-transitory computer readable storage medium has stored therein instructions that, when executed by one or more processors of a device, cause the device to perform or cause performance of any of the methods described herein. In accordance with some implementations, a device includes one or more processors, a non-transitory memory, and means for performing or causing performance of any of the methods described herein.
Numerous details are described in order to provide a thorough understanding of the example implementations shown in the drawings. However, the drawings merely show some example aspects of the present disclosure and are therefore not to be considered limiting. Those of ordinary skill in the art will appreciate that other effective aspects and/or variants do not include all of the specific details described herein. Moreover, well-known systems, methods, components, devices, and circuits have not been described in exhaustive detail so as not to obscure more pertinent aspects of the example implementations described herein.
A person can interact with and/or sense a physical environment or physical world without the aid of an electronic device. A physical environment can include physical features, such as a physical object or surface. An example of a physical environment is a physical forest that includes physical plants and animals. A person can directly sense and/or interact with a physical environment through various means, such as hearing, sight, taste, touch, and smell. In contrast, a person can use an electronic device to interact with and/or sense an extended reality (XR) environment that is wholly or partially simulated. The XR environment can include mixed reality (MR) content, augmented reality (AR) content, virtual reality (VR) content, and/or the like. With an XR system, some of a person's physical motions, or representations thereof, can be tracked and, in response, characteristics of virtual objects simulated in the XR environment can be adjusted in a manner that complies with at least one law of physics. For instance, the XR system can detect the movement of a user's head and adjust graphical content and auditory content presented to the user similar to how such views and sounds would change in a physical environment. In another example, the XR system can detect movement of an electronic device that presents the XR environment (e.g., a mobile phone, tablet, laptop, or the like) and adjust graphical content and auditory content presented to the user similar to how such views and sounds would change in a physical environment. In some situations, the XR system can adjust characteristic(s) of graphical content in response to other inputs, such as a representation of a physical motion (e.g., a vocal command).
Many different types of electronic systems can enable a user to interact with and/or sense an XR environment. A non-exclusive list of examples includes heads-up displays (HUDs), head mountable systems, projection-based systems, windows or vehicle windshields having integrated display capability, displays formed as lenses to be placed on users' eyes (e.g., contact lenses), headphones/earphones, input systems with or without haptic feedback (e.g., wearable or handheld controllers), speaker arrays, smartphones, tablets, and desktop/laptop computers. A head mountable system can have one or more speaker(s) and an opaque display. Other head mountable systems can be configured to accept an opaque external display (e.g., a smartphone). The head mountable system can include one or more image sensors to capture images/video of the physical environment and/or one or more microphones to capture audio of the physical environment. A head mountable system may have a transparent or translucent display, rather than an opaque display. The transparent or translucent display can have a medium through which light is directed to a user's eyes. The display may utilize various display technologies, such as uLEDs, OLEDs, LEDs, liquid crystal on silicon, laser scanning light source, digital light projection, or combinations thereof. An optical waveguide, an optical reflector, a hologram medium, an optical combiner, combinations thereof, or other similar technologies can be used for the medium. In some implementations, the transparent or translucent display can be selectively controlled to become opaque. Projection-based systems can utilize retinal projection technology that projects images onto users' retinas. Projection systems can also project virtual objects into the physical environment (e.g., as a hologram or onto a physical surface).
In some implementations, an electronic device comprises one or more processors working with non-transitory memory. In some implementations, the non-transitory memory stores one or more programs of executable instructions that are executed by the one or more processors. In some implementations, the executable instructions carry out the techniques and processes described herein. In some implementations, a non-transitory computer-readable storage medium has instructions stored therein that, when executed by one or more processors of an electronic device, cause the electronic device to perform or cause performance of any of the techniques and processes described herein. In some implementations, a device includes one or more processors, a non-transitory memory, and means for performing or causing performance of the techniques and processes described herein.
The present disclosure provides methods, systems, and/or devices for selecting multiple virtual objects within an extended reality (XR) environment. In various implementations, an electronic device, such as a smartphone, tablet, or laptop or desktop computer, displays virtual objects in an extended reality (XR) environment.
Selection of multiple virtual objects in an XR environment can be tedious due to the effort involved in manipulating multiple virtual objects with gestures. For example, a user may create a group of virtual objects by moving a first virtual object to an area, then moving a second virtual object to the same area. The user may repeat the process to add other virtual objects to the group. Using these gestures to organize virtual objects in the XR environment may involve large gestures performed by the user. Requiring a user to arrange the virtual objects by using a large gesture for each virtual object may increase the amount of effort the user expends to organize the virtual objects. Interpreting and acting upon user inputs that correspond to the user manually arranging the virtual objects results in power consumption and/or heat generation, thereby adversely impacting operability of the device.
In various implementations, a user can use a gesture to select a first virtual object and to initiate the selection of multiple virtual objects. The user can then use the first virtual object as a tool to select other virtual objects by passing over them. As the user passes over the other virtual objects, the virtual objects are moved together, e.g., as a group. When the user performs another gesture, the virtual objects are dropped together. The user can thus select and move multiple virtual objects using a simplified set of movements. For example, the user may avoid the need for separate gestures to select multiple virtual objects to add to a group of virtual objects. In some implementations, a single gesture may be used to create a group of virtual objects. Reducing unnecessary user inputs reduces utilization of computing resources associated with interpreting and acting upon unnecessary user inputs, thereby enhancing operability of the device by reducing power consumption and/or heat generation by the device.
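The single-gesture accumulation behavior described above can be sketched in simplified form. The class and method names (`VirtualObject`, `GroupSelection`, `move_anchor`) and the 0.15 m threshold are illustrative assumptions, not part of the disclosure; for simplicity, grouped objects snap to the anchor's position rather than animating alongside it.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    name: str
    position: tuple  # (x, y, z) coordinates in the XR environment

@dataclass
class GroupSelection:
    anchor: VirtualObject           # the first virtual object, used as a selection tool
    members: list = field(default_factory=list)

    def move_anchor(self, new_position, others, threshold=0.15):
        """Move the anchor; any object the anchor passes within
        `threshold` of joins the group and travels with it."""
        self.anchor.position = new_position
        for obj in others:
            if obj not in self.members and _distance(obj.position, new_position) < threshold:
                self.members.append(obj)
        # Simplified: members snap to the anchor's position.
        for obj in self.members:
            obj.position = new_position

def _distance(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
```

Dragging the anchor near an object adds it to the group; a distant object is unaffected, so no separate selection gesture is needed per object.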
In some implementations, the electronic device 102 includes a handheld computing device that can be held by the user 104. For example, in some implementations, the electronic device 102 includes a smartphone, a tablet, a media player, a laptop, or the like. In some implementations, the electronic device 102 includes a desktop computer. In some implementations, the electronic device 102 includes a wearable computing device that can be worn by the user 104. For example, in some implementations, the electronic device 102 includes a head-mountable device (HMD), an electronic watch or a pair of headphones. In some implementations, the electronic device 102 is a dedicated virtual assistant device that includes a speaker for playing audio and a microphone for receiving verbal commands. In some implementations, the electronic device 102 includes a television or a set-top box that outputs video data to a television.
In various implementations, the electronic device 102 includes (e.g., implements) a user interface engine that displays a user interface on a display 106. In some implementations, the display 106 is integrated in the electronic device 102. In some implementations, the display 106 is implemented as a separate device from the electronic device 102. For example, the display 106 may be implemented as an HMD that is in communication with the electronic device 102. In some implementations, the user interface engine displays the user interface in an extended reality (XR) environment 108 on the display 106. The user interface may include one or more virtual objects 110a, 110b, 110c (collectively referred to as virtual objects 110) that are displayed in the XR environment 108.
As represented in
As represented in
In some implementations, as represented in
As represented in
While pertinent features are illustrated, those of ordinary skill in the art will appreciate from the present disclosure that various other features have not been illustrated for the sake of brevity and so as not to obscure more pertinent aspects of the implementations disclosed herein. Those of ordinary skill in the art will also appreciate from the present disclosure that the functions and sub-functions implemented by the user interface engine 200 can be combined into one or more systems and/or further sub-divided into additional subsystems, and that the functionality described below is provided as merely one example configuration of the various aspects and functions described herein.
In some implementations, the user interface engine 200 includes a display 202. The display 202 displays one or more virtual objects, e.g., the virtual objects 110, in an XR environment, such as the XR environment 108 of
In some implementations, the virtual object renderer 210 displays a movement of the first virtual object in the XR environment. For example, the virtual object renderer 210 may display a movement of the first virtual object to follow a gesture (e.g., a dragging gesture) performed by the user. In some implementations, the virtual object renderer 210 detects a movement of the first virtual object within a threshold distance of a second virtual object in the XR environment. For example, the virtual object renderer 210 may determine that the user has dragged the first virtual object within the threshold distance of the second virtual object. In response to detecting the movement of the first virtual object within the threshold distance of the second virtual object, the virtual object renderer 210 may display a movement of the first virtual object and the second virtual object concurrently in the XR environment. In some implementations, the movement is concurrent and is based on the first gesture. For example, the displayed movement may follow a direction of the first gesture.
In some implementations, this displayed concurrent movement of virtual objects is applied to larger groups of virtual objects. For example, if the virtual object renderer 210 determines that the user has dragged the first virtual object near multiple virtual objects in succession, a group of virtual objects (e.g., the group object 122 of
In some implementations, if the virtual object renderer 210 receives a second gesture, the virtual object renderer 210 causes the display 202 to display the virtual objects at a location associated with the second gesture. For example, if the second gesture is a spreading of the user's fingers, the virtual objects may be displayed proximate a location in the XR environment at which the user's fingers were spread. In some implementations, the second gesture may follow a path. For example, the user may perform a finger spreading gesture while moving the hand in an arc. The virtual objects in the group may be displayed along or near the path. In some implementations, the virtual object renderer 210 may generate a group object. For example, the individual virtual objects may be replaced by the group object in the XR environment.
While pertinent features are illustrated, those of ordinary skill in the art will appreciate from the present disclosure that various other features have not been illustrated for the sake of brevity and so as not to obscure more pertinent aspects of the implementations disclosed herein. Those of ordinary skill in the art will also appreciate from the present disclosure that the functions and sub-functions implemented by the virtual object renderer 300 can be combined into one or more systems and/or further sub-divided into additional subsystems, and that the functionality described below is provided as merely one example configuration of the various aspects and functions described herein.
In some implementations, the display 302 displays a user interface in an XR environment. The user interface may include one or more virtual objects that are displayed in the XR environment. In some implementations, an input obtainer 310 receives a first gesture that is associated with a first virtual object in the XR environment. For example, the image sensor 304 may receive an image. The image may be a still image or a video feed that comprises a series of image frames. The image may include a set of pixels representing an extremity of the user.
In some implementations, a gesture identifier 320 performs image analysis on the image to detect a first gesture performed by the user. The first gesture may include, for example, a pinching gesture that is performed near the first virtual object. The gesture identifier 320 may identify the virtual object (e.g., the first virtual object) to which the gesture is directed. In some implementations, the gesture identifier 320 identifies a motion associated with the gesture. For example, if the user performs the first gesture along a path in the physical environment, the gesture identifier 320 may identify the path and/or determine a corresponding path in the XR environment.
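One way the gesture identifier 320 might classify a pinch is by comparing fingertip landmark positions extracted from the image. This is a minimal sketch under stated assumptions: the function names, the 3 cm fingertip threshold, and the representation of objects as (name, position) pairs are all hypothetical, and real implementations would operate on tracked hand-skeleton data rather than raw tuples.

```python
def is_pinch(thumb_tip, index_tip, pinch_threshold=0.03):
    """Classify a pinch when the thumb and index fingertips nearly touch.

    Positions are (x, y, z) coordinates in meters; the 3 cm threshold
    is an illustrative assumption.
    """
    dist = sum((a - b) ** 2 for a, b in zip(thumb_tip, index_tip)) ** 0.5
    return dist < pinch_threshold

def pinch_target(pinch_point, objects):
    """Return the (name, position) pair of the virtual object nearest
    the pinch point, i.e., the object the gesture is directed to."""
    return min(objects,
               key=lambda o: sum((a - b) ** 2 for a, b in zip(o[1], pinch_point)))
```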
In some implementations, an object placement determiner 330 determines a placement location of the first virtual object based on the first gesture. For example, if the user performs the first gesture along a path in the physical environment, the object placement determiner 330 may determine that the first virtual object should follow the corresponding path in the XR environment. In some implementations, the object placement determiner 330 determines the path in the XR environment that corresponds to the path of the first gesture in the physical environment. In some implementations, the gesture identifier 320 determines the corresponding path in the XR environment.
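The physical-path-to-XR-path correspondence might be realized as a simple affine transform. The uniform scale and translation below are assumptions for illustration; an actual mapping would typically involve the full device pose.

```python
def to_xr_path(physical_path, scale=1.0, offset=(0.0, 0.0, 0.0)):
    """Map a tracked physical-space path (a list of (x, y, z) points)
    into XR world coordinates via a uniform scale and a translation."""
    return [tuple(scale * p + o for p, o in zip(point, offset))
            for point in physical_path]
```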
The object placement determiner 330 may detect a movement of the first virtual object in the XR environment within a threshold distance of a second virtual object in the XR environment. For example, the object placement determiner 330 may store and/or access location information (e.g., coordinates) associated with virtual objects in the XR environment. If the location information associated with the first virtual object and the location information associated with the second virtual object indicate that the distance between the first virtual object and the second virtual object is less than the threshold distance, the object placement determiner 330 may determine that the first virtual object has moved within the threshold distance of the second virtual object.
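The threshold comparison on stored coordinates reduces to a Euclidean distance check, as in this sketch (the function name is illustrative):

```python
import math

def within_threshold(pos_a, pos_b, threshold):
    """True when two XR-space positions are closer together than the
    threshold distance, triggering group inclusion."""
    return math.dist(pos_a, pos_b) < threshold
```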
In some implementations, when the object placement determiner 330 determines that the first virtual object has moved within the threshold distance of the second virtual object, the object placement determiner 330 associates the first virtual object and the second virtual object, e.g., creates a group comprising the first virtual object and the second virtual object.
In some implementations, a display module 340 causes the display 302 to display virtual objects (e.g., the first virtual object and the second virtual object) at the object placement locations determined by the object placement determiner 330. Virtual objects that are associated with one another by the object placement determiner 330 may be displayed as a group. For example, if the object placement determiner 330 detects that the first virtual object has moved within the threshold distance of the second virtual object, the display module 340 may display a movement of the first virtual object and the second virtual object concurrently in the XR environment. The movement may be based on the first gesture. For example, if the first gesture follows a path in the physical environment, the displayed movement may follow a corresponding path in the XR environment.
In some implementations, the display module 340 displays concurrent movement of larger groups of virtual objects. For example, the object placement determiner 330 may determine that the user has dragged the first virtual object near multiple virtual objects in succession, e.g., if the distance between the first virtual object and other virtual objects in the XR environment is less than the threshold distance at various times over the course of the movement of the first virtual object. The object placement determiner 330 may create a group of multiple virtual objects that includes the virtual objects to which the first virtual object was displayed within a threshold distance. In this way, virtual objects may be accumulated. The display module 340 may cause the display 302 to display concurrent movement of the virtual objects forming the group of virtual objects.
In some implementations, if the gesture identifier 320 detects a second gesture, the display module 340 causes the display 302 to display the virtual objects at a location associated with the second gesture. For example, if the second gesture is a spreading of the user's fingers, the virtual objects may be displayed proximate a location in the XR environment at which the user's fingers were spread. In some implementations, the second gesture may follow a path in the physical environment. For example, the user may perform a finger spreading gesture while moving the hand in an arc. The virtual objects in the group may be displayed along or near a corresponding path in the XR environment. In some implementations, the object placement determiner 330 may generate a group object that replaces the individual virtual objects in the XR environment.
In some implementations, a user interface including one or more virtual objects is displayed in an XR environment. A user may interact with a virtual object, e.g., using gestures, such as pinch and/or pull gestures, to manipulate the virtual object. Referring to
Referring to
In some implementations, as represented by block 410b, the first gesture is received via a second device. For example, a wearable device may include an accelerometer, gyroscope, and/or inertial measurement unit (IMU) that may provide information relating to movements of an extremity of the user. As another example, the electronic device 102 may be implemented as a head-mountable device (HMD), and the first gesture may be received from a smartphone or tablet that is in communication with the electronic device 102.
In some implementations, as represented by block 410c, a visual effect is displayed in association with the first virtual object in response to receiving the first gesture. For example, to confirm selection of the first virtual object, a shimmering or other visual effect may be displayed. As represented by block 410d, the visual effect may include a deformation of the first virtual object. The deformation may be physics-based and may be dependent on a type of object represented by the virtual object. For example, the displayed deformation may be similar to a deformation of a real-world counterpart to the virtual object.
Other modalities for confirming selection of the first virtual object may be implemented. For example, as represented by block 410e, an audio output may be generated in response to receiving the first gesture. The audio output may include a sound effect and/or a verbal confirmation that the first virtual object was selected. In some implementations, as represented by block 410f, a haptic output is generated in response to receiving the first gesture. The haptic output may be delivered through the electronic device 102 and/or through another device.
In various implementations, as represented by block 420, the method 400 includes detecting a movement of the group of virtual objects including the first virtual object within the XR environment in a first direction towards a second virtual object in the XR environment. For example, the electronic device 102 may store and/or access location information (e.g., coordinates) associated with virtual objects in the XR environment. If the location information associated with the first virtual object and the location information associated with the second virtual object indicate that the distance between the first virtual object and the second virtual object is less than the threshold distance, the electronic device 102 may determine that the first virtual object has moved within the threshold distance of the second virtual object.
In some implementations, as represented by block 420a, the method 400 includes displaying a movement of the second virtual object toward the first virtual object in response to detecting the movement of the group of virtual objects including the first virtual object within the threshold distance of the second virtual object in the XR environment in order to indicate that the second virtual object has been included in the group of virtual objects. In some implementations, movement of the group of virtual objects including the first virtual object and the second virtual object may be displayed. This movement may be in respective directions toward a point between the first virtual object and the second virtual object.
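The movement toward a point between the two objects could be computed as an interpolation toward their midpoint. This is a hedged sketch: the `step` parameter (fraction of the remaining distance covered per update) is an assumption introduced here to suggest how the motion might be animated over several frames.

```python
def converge_toward_midpoint(pos_a, pos_b, step=0.5):
    """Move two object positions toward the point midway between them.

    step=1.0 snaps both objects to the midpoint in a single update;
    smaller steps yield an animated convergence over several frames.
    """
    mid = tuple((a + b) / 2 for a, b in zip(pos_a, pos_b))
    new_a = tuple(a + step * (m - a) for a, m in zip(pos_a, mid))
    new_b = tuple(b + step * (m - b) for b, m in zip(pos_b, mid))
    return new_a, new_b
```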
In some implementations, as represented by block 420b, an audio output is generated in response to detecting the movement of the group of virtual objects including the first virtual object in the XR environment within the threshold distance of the second virtual object in the XR environment in order to indicate that the second virtual object has been included in the group of virtual objects. The audio output may include a sound effect and/or a verbal confirmation that the first virtual object and the second virtual object are associated with one another and/or have been added to the group of virtual objects, for example. In some implementations, as represented by block 420c, a haptic output is generated in response to detecting the movement of the group of virtual objects including the first virtual object in the XR environment within the threshold distance of the second virtual object in the XR environment in order to indicate that the second virtual object has been included in the group of virtual objects. The haptic output may be delivered through the electronic device 102 and/or through another device.
As represented by block 430, in some implementations, the method 400 includes, in response to detecting the movement of the group of virtual objects including the first virtual object in the environment within a threshold distance of the second virtual object in the environment, selecting the second virtual object for inclusion in the group of virtual objects and displaying a movement of the group of virtual objects including the first virtual object and the second virtual object in the environment based on the first gesture in a second direction that is different from the first direction. For example, as shown in
Referring to
In some implementations, as represented by block 430e, the second gesture is associated with a path in the XR environment. For example, the user may trace a path in the physical environment while performing the second gesture. The path in the physical environment may correspond to a path in the XR environment. As represented by block 430f, the path may include a line segment in the XR environment. For example, the path in the physical environment may include a line segment that corresponds to a line segment in the XR environment. As represented by block 430g, the path may include an arc in the XR environment. For example, the path in the physical environment may include an arc that corresponds to an arc in the XR environment. In some implementations, the path may be a more complex shape, e.g., incorporating line segments and/or arcs. As represented by block 430h, the method 400 may include displaying the group of virtual objects including the first virtual object and the second virtual object along the path. For example, if the user traces a horizontal line in the physical environment while performing the second gesture, the group of virtual objects including the first virtual object and the second virtual object may be “dropped” along the corresponding horizontal line in the XR environment.
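Dropping the group along a line-segment path could be implemented by spacing the objects evenly between the segment's endpoints. The function below is an illustrative sketch (its name and the centering rule for a single object are assumptions); an arc-shaped path would parameterize by angle instead.

```python
def positions_along_segment(n, start, end):
    """Return n evenly spaced (x, y, z) positions along the line
    segment from start to end; a single object is centered on it."""
    if n == 1:
        return [tuple((s + e) / 2 for s, e in zip(start, end))]
    return [tuple(s + i / (n - 1) * (e - s) for s, e in zip(start, end))
            for i in range(n)]
```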
In some implementations, as represented by block 430i, the method 400 includes creating a group of virtual objects that includes the first virtual object and the second virtual object. For example, when the electronic device 102 detects movement of the first virtual object within the threshold distance of the second virtual object, the electronic device 102 may associate the first virtual object and the second virtual object with one another. As the first virtual object is moved around the XR environment, other virtual objects that the first virtual object moves near may be added to the group of virtual objects. In some implementations, concurrent movement of all of the virtual objects in the group is displayed. In some implementations, as represented by block 430j, a third virtual object representing the first virtual object and the second virtual object is displayed. The third virtual object may represent and/or replace all of the virtual objects in the group.
In some implementations, the second direction is towards a third virtual object in the environment. In some implementations, the method 400 includes, in response to detecting the movement of the group of virtual objects including the first virtual object and the second virtual object in the environment within the threshold distance of the third virtual object in the environment, selecting the third virtual object for inclusion in the group of virtual objects and displaying a movement of the group of virtual objects including the first virtual object, the second virtual object and the third virtual object in the environment based on the first gesture in a third direction that is different from the second direction.
In some implementations, the second direction is towards a portion of the environment that corresponds to a drop zone where the group of virtual objects is to be placed. In some implementations, the method 400 includes, in response to detecting the movement of the group of virtual objects including the first virtual object and the second virtual object into the drop zone, placing the group of virtual objects including the first virtual object and the second virtual object in the drop zone.
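A drop zone test might be as simple as a containment check against an axis-aligned box, as in this sketch (the box representation by min/max corners is an assumption; a real drop zone could be any region of the environment):

```python
def in_drop_zone(position, zone_min, zone_max):
    """True when an object's (x, y, z) position lies inside an
    axis-aligned box-shaped drop zone given by its corner extremes."""
    return all(lo <= p <= hi
               for p, lo, hi in zip(position, zone_min, zone_max))
```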
In some implementations, the communication interface 508 is provided to, among other uses, establish and maintain a metadata tunnel between a cloud-hosted network management system and at least one private network including one or more compliant devices. In some implementations, the one or more communication buses 504 include circuitry that interconnects and controls communications between system components. The memory 520 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices, and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. The memory 520 optionally includes one or more storage devices remotely located from the one or more CPUs 502. The memory 520 comprises a non-transitory computer readable storage medium.
In some implementations, the memory 520 or the non-transitory computer readable storage medium of the memory 520 stores the following programs, modules and data structures, or a subset thereof including an optional operating system 530, the input obtainer 310, the gesture identifier 320, the object placement determiner 330, and the display module 340. As described herein, the input obtainer 310 may include instructions 310a and/or heuristics and metadata 310b for receiving a first gesture that is associated with a first virtual object in the XR environment. As described herein, the gesture identifier 320 may include instructions 320a and/or heuristics and metadata 320b for performing image analysis on the image to detect the first gesture performed by the user. As described herein, the object placement determiner 330 may include instructions 330a and/or heuristics and metadata 330b for determining a placement location of the first virtual object based on the first gesture. As described herein, the display module 340 may include instructions 340a and/or heuristics and metadata 340b for causing a display to display virtual objects at the object placement locations determined by the object placement determiner 330.
It will be appreciated that the figures are intended as a functional description of the various features which may be present in a particular implementation as opposed to a structural schematic of the implementations described herein. As recognized by those of ordinary skill in the art, items shown separately could be combined and some items could be separated. For example, some functional blocks shown separately in the figures could be implemented as a single block, and the various functions of single functional blocks could be implemented by one or more functional blocks in various implementations. The actual number of blocks and the division of particular functions and how features are allocated among them will vary from one implementation to another and, in some implementations, depends in part on the particular combination of hardware, software, and/or firmware chosen for a particular implementation.
While various aspects of implementations within the scope of the appended claims are described above, it should be apparent that the various features of implementations described above may be embodied in a wide variety of forms and that any specific structure and/or function described above is merely illustrative. Based on the present disclosure one skilled in the art should appreciate that an aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method may be practiced using any number of the aspects set forth herein. In addition, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to or other than one or more of the aspects set forth herein.
It will also be understood that, although the terms “first,” “second,” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another.
The terminology used herein is for the purpose of describing particular implementations only and is not intended to be limiting of the claims. As used in the description of the implementations and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in accordance with a determination” or “in response to detecting,” that a stated condition precedent is true, depending on the context. Similarly, the phrase “if it is determined [that a stated condition precedent is true]” or “if [a stated condition precedent is true]” or “when [a stated condition precedent is true]” may be construed to mean “upon determining” or “in response to determining” or “in accordance with a determination” or “upon detecting” or “in response to detecting” that the stated condition precedent is true, depending on the context.
This application is a continuation of Intl. Patent App. No. PCT/US2021/47983, filed on Aug. 27, 2021, which claims priority to U.S. Provisional Patent App. No. 63/081,992, filed on Sep. 23, 2020, both of which are hereby incorporated by reference in their entirety.
Provisional application:

Number | Date | Country
---|---|---
63/081,992 | Sep. 23, 2020 | US

Continuation:

 | Number | Date | Country
---|---|---|---
Parent | PCT/US21/47983 | Aug. 27, 2021 | US
Child | 18/123,841 | | US