This relates generally to computer graphics editors.
Extended reality (XR) environments provide two-dimensional and/or three-dimensional environments where at least some objects displayed for a user's viewing are generated by a computer. In some uses, a user may create or modify extended reality environments, such as by editing, generating, or otherwise manipulating extended reality objects using a content generation environment, such as a graphics editor or graphics editing interface. Editors that allow for intuitive editing of computer-generated objects are desirable.
Some embodiments described in this disclosure are directed to selecting and manipulating virtual objects in a content generation environment using a pointing device (e.g., a stylus). In some embodiments, the orientation of the pointing device in physical space determines the virtual object that is selected for input. In some embodiments, a selection input is received via the pointing device while the pointing device is pointed at a respective virtual object and depending on the type of selection input, either a first type of manipulation or a second type of manipulation is performed on the respective virtual object. In some embodiments, the respective virtual object changes visual characteristics based on whether the pointing device is pointed at the virtual object or whether the virtual object has been selected for input.
The full descriptions of the embodiments are provided in the Drawings and the Detailed Description, and it is understood that this Summary does not limit the scope of the disclosure in any way.
For a better understanding of the various described embodiments, reference should be made to the Detailed Description below, in conjunction with the following drawings in which like reference numerals often refer to corresponding parts throughout the figures.
In the following description of embodiments, reference is made to the accompanying drawings which form a part hereof, and in which it is shown by way of illustration specific embodiments that are optionally practiced. It is to be understood that other embodiments are optionally used and structural changes are optionally made without departing from the scope of the disclosed embodiments. Further, although the following description uses terms “first,” “second,” etc. to describe various elements, these elements should not be limited by the terms. These terms are only used to distinguish one element from another. For example, a respective selection input could be referred to as a “first” or “second” selection input, without implying that the respective selection input has different characteristics based merely on the fact that the respective selection input is referred to as a “first” or “second” selection input. On the other hand, a selection input referred to as a “first” selection input and a selection input referred to as a “second” selection input are both selection inputs, but they are not the same selection input, unless explicitly described as such.
The terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The term “if” is, optionally, construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” is, optionally, construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event],” depending on the context.
A physical environment refers to a physical world that people can sense and/or interact with without aid of electronic devices. The physical environment may include physical features such as a physical surface or a physical object. For example, the physical environment corresponds to a physical park that includes physical trees, physical buildings, and physical people. People can directly sense and/or interact with the physical environment such as through sight, touch, hearing, taste, and smell. In contrast, an extended reality (XR) environment refers to a wholly or partially simulated environment that people sense and/or interact with via an electronic device. For example, the XR environment may include augmented reality (AR) content, mixed reality (MR) content, virtual reality (VR) content, and/or the like. With an XR system, a subset of a person's physical motions, or representations thereof, are tracked, and, in response, one or more characteristics of one or more virtual objects simulated in the XR environment are adjusted in a manner that comports with at least one law of physics. As one example, the XR system may detect head movement and, in response, adjust graphical content and an acoustic field presented to the person in a manner similar to how such views and sounds would change in a physical environment. As another example, the XR system may detect movement of the electronic device presenting the XR environment (e.g., a mobile phone, a tablet, a laptop, or the like) and, in response, adjust graphical content and an acoustic field presented to the person in a manner similar to how such views and sounds would change in a physical environment. In some situations (e.g., for accessibility reasons), the XR system may adjust characteristic(s) of graphical content in the XR environment in response to representations of physical motions (e.g., vocal commands).
There are many different types of electronic systems that enable a person to sense and/or interact with various XR environments. Examples include head mountable systems, projection-based systems, heads-up displays (HUDs), vehicle windshields having integrated display capability, windows having integrated display capability, displays formed as lenses designed to be placed on a person's eyes (e.g., similar to contact lenses), headphones/earphones, speaker arrays, input systems (e.g., wearable or handheld controllers with or without haptic feedback), smartphones, tablets, and desktop/laptop computers. A head mountable system may have one or more speaker(s) and an integrated opaque display. Alternatively, a head mountable system may be configured to accept an external opaque display (e.g., a smartphone). The head mountable system may incorporate one or more imaging sensors to capture images or video of the physical environment, and/or one or more microphones to capture audio of the physical environment. Rather than an opaque display, a head mountable system may have a transparent or translucent display. The transparent or translucent display may have a medium through which light representative of images is directed to a person's eyes. The display may utilize digital light projection, OLEDs, LEDs, uLEDs, liquid crystal on silicon, laser scanning light source, or any combination of these technologies. The medium may be an optical waveguide, a hologram medium, an optical combiner, an optical reflector, or any combination thereof. In some implementations, the transparent or translucent display may be configured to become opaque selectively. Projection-based systems may employ retinal projection technology that projects graphical images onto a person's retina. Projection systems also may be configured to project virtual objects into the physical environment, for example, as a hologram or on a physical surface.
In some embodiments, XR content can be presented to the user via an XR file that includes data representing the XR content and/or data describing how the XR content is to be presented. In some embodiments, the XR file includes data representing one or more XR scenes and one or more triggers for presentation of the one or more XR scenes. For example, an XR scene may be anchored to a horizontal, planar surface, such that when a horizontal, planar surface is detected (e.g., in the field of view of one or more cameras), the XR scene can be presented. The XR file can also include data regarding one or more objects (e.g., virtual objects) associated with the XR scene, and/or associated triggers and actions involving the XR objects.
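For purposes of illustration only, the data carried by such an XR file can be sketched as a small set of types. The following Swift sketch is a hypothetical, simplified model; the type names, anchor kinds, and trigger kinds are assumptions and do not correspond to any particular file format.

```swift
import simd

// Hypothetical, simplified model of the data an XR file might carry: one or
// more scenes, each with an anchoring rule, its objects, and triggers for
// presentation/actions. All names here are illustrative.
enum SceneAnchor {
    case horizontalPlane                 // e.g., present the scene on a detected tabletop or floor
    case verticalPlane
    case imageMarker(name: String)
}

enum SceneTrigger {
    case anchorDetected                  // present the scene when its anchor is detected
    case tap(objectID: String)           // run an action when a given object is tapped
    case proximity(objectID: String, meters: Float)
}

struct XRObject {
    var id: String
    var meshName: String                 // reference to geometry stored elsewhere in the file
    var transform: simd_float4x4         // pose of the object within the scene
}

struct XRScene {
    var anchor: SceneAnchor
    var objects: [XRObject]
    var triggers: [SceneTrigger]
}

struct XRFile {
    var scenes: [XRScene]
}
```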
In order to simplify the generation of XR files and/or editing of computer-generated graphics generally, a computer graphics editor including a content generation environment (e.g., an authoring environment graphical user interface (GUI)) can be used. In some embodiments, a content generation environment is itself an XR environment (e.g., a two-dimensional and/or three-dimensional environment). For example, a content generation environment can include one or more virtual objects and one or more representations of real world objects. In some embodiments, the virtual objects are superimposed over a physical environment, or a representation thereof. In some embodiments, the physical environment is captured via one or more cameras of the electronic device and is actively displayed in the XR environment (e.g., via the display generation component). In some embodiments, the physical environment is (e.g., passively) provided by the electronic device, for example, if the display generation component includes a translucent or transparent element through which the user is able to see the physical environment.
In such a content generation environment, a user can create objects from scratch (including the appearance of the objects, behaviors/actions of the objects, and/or triggers for the behaviors/actions of the objects). Additionally or alternatively, objects can be created by other content creators and imported into the content generation environment, where the objects can be placed into an XR environment or scene. In some embodiments, objects generated in a content generation environment or entire environments can be exported to other environments or XR scenes (e.g., via generating an XR file and importing or opening the XR file in a computer graphics editor application or XR viewer application).
As will be described in further detail below, in some embodiments, the content generation environment can enable a user to perform one or more transformations or manipulations of an object, such as relocating (e.g., moving), rotating, resizing, etc. In some embodiments, a pointing device can be used to select the object of interest. In some embodiments, the type of manipulation to perform on the object of interest can depend on the type of selection input received while the pointing device is pointed at the object of interest. For example, if the device receives a first type of selection input, a first type of manipulation can be performed on the object, but if the device receives a second type of selection input, a second type of manipulation can be performed on the object. In some embodiments, the manipulation on the object is performed in accordance with a change in the pose of the hand of the user and/or the pointing device that is held in the hand of the user. In some embodiments, the visual characteristic of the object of interest can change to indicate that the pointing device is pointed at the object of interest and/or that the object of interest has been selected (e.g., selected for input, selected for manipulation, etc., as a result of receiving a selection input). As used herein, the “pose” of the hand and/or “pose” of the pointing device refers to the position and/or location of the hand or pointing device in space (e.g., which maps to a location in the content generation environment, as will be described in more detail below) and/or the orientation of the hand or pointing device (e.g., absolute orientation with respect to gravity or “down” or relative orientation with respect to other objects). Thus, a change in the pose of the hand can refer to a change in the position of the hand (e.g., a movement of the hand to a new location), a change in the orientation of the hand (e.g., a rotation of the hand), or a combination of both. Further details of object manipulation are described with respect to
Embodiments of electronic devices and user interfaces for such devices are described. In some embodiments, the device is a portable communications device, such as a laptop or tablet computer. In some embodiments, the device is a mobile telephone that also contains other functions, such as personal digital assistant (PDA) and/or music player functions. In some embodiments, the device is a wearable device, such as a watch, a head-mounted display, etc. It should also be understood that, in some embodiments, the device is not a portable communications device, but is a desktop computer or a television. In some embodiments, the portable and non-portable electronic devices may optionally include touch-sensitive surfaces (e.g., touch screen displays and/or touch pads). In some embodiments, the device does not include a touch-sensitive surface (e.g., a touch screen display and/or a touch pad), but rather is capable of outputting display information (such as the user interfaces of the disclosure) for display on an integrated or external display device, and capable of receiving input information from an integrated or external input device having one or more input mechanisms (such as one or more buttons, a mouse, a touch screen display, stylus, and/or a touch pad). In some embodiments, the device has a display, but is capable of receiving input information from a separate input device having one or more input mechanisms (such as one or more buttons, a mouse, a touch screen display, and/or a touch pad).
In the discussion that follows, an electronic device that is in communication with a display generation component and one or more input devices is described. It should be understood that the electronic device optionally is in communication with one or more other physical user-interface devices, such as a touch-sensitive surface, a physical keyboard, a mouse, a joystick, a hand tracking device, an eye tracking device, a stylus, etc. Further, as described above, it should be understood that the described electronic device, display and touch-sensitive surface are optionally distributed amongst two or more devices. Therefore, as used in this disclosure, information displayed on the electronic device or by the electronic device is optionally used to describe information outputted by the electronic device for display on a separate display device (touch-sensitive or not). Similarly, as used in this disclosure, input received on the electronic device (e.g., touch input received on a touch-sensitive surface of the electronic device, or touch input received on the surface of a stylus) is optionally used to describe input received on a separate input device, from which the electronic device receives input information.
The device typically supports a variety of applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, a television channel browsing application, and/or a digital video player application. Additionally, the device may support an application for generating or editing content for computer generated graphics and/or XR environments (e.g., an application with a content generation environment).
The various applications that are executed on the device optionally use a common physical user-interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface as well as corresponding information displayed on the device are, optionally, adjusted and/or varied from one application to the next and/or within a respective application. In this way, a common physical architecture (such as the touch-sensitive surface) of the device optionally supports the variety of applications with user interfaces that are intuitive and transparent to the user.
Device 200 includes communication circuitry 202. Communication circuitry 202 optionally includes circuitry for communicating with electronic devices, networks, such as the Internet, intranets, a wired network and/or a wireless network, cellular networks and wireless local area networks (LANs). Communication circuitry 202 optionally includes circuitry for communicating using near-field communication (NFC) and/or short-range communication, such as Bluetooth®.
Processor(s) 204 include one or more general processors, one or more graphics processors, and/or one or more digital signal processors. In some embodiments, memory 206 is a non-transitory computer-readable storage medium (e.g., flash memory, random access memory, or other volatile or non-volatile memory or storage) that stores computer-readable instructions configured to be executed by processor(s) 204 to perform the techniques, processes, and/or methods described below. In some embodiments, memory 206 can include more than one non-transitory computer-readable storage medium. A non-transitory computer-readable storage medium can be any medium (e.g., excluding a signal) that can tangibly contain or store computer-executable instructions for use by or in connection with the instruction execution system, apparatus, or device. In some embodiments, the storage medium is a transitory computer-readable storage medium. In some embodiments, the storage medium is a non-transitory computer-readable storage medium. The non-transitory computer-readable storage medium can include, but is not limited to, magnetic, optical, and/or semiconductor storages. Examples of such storage include magnetic disks, optical discs based on CD, DVD, or Blu-ray technologies, as well as persistent solid-state memory such as flash, solid-state drives, and the like.
Device 200 includes display generation component(s) 224. In some embodiments, display generation component(s) 224 include a single display (e.g., a liquid-crystal display (LCD), organic light-emitting diode (OLED), or other types of display). In some embodiments, display generation component(s) 224 includes multiple displays. In some embodiments, display generation component(s) 224 can include a display with touch capability (e.g., a touch screen), a projector, a holographic projector, a retinal projector, etc. In some embodiments, device 200 includes touch-sensitive surface(s) 220 for receiving user inputs, such as tap inputs and swipe inputs or other gestures. In some embodiments, display generation component(s) 224 and touch-sensitive surface(s) 220 form touch-sensitive display(s) (e.g., a touch screen integrated with device 200 or external to device 200 that is in communication with device 200).
Device 200 optionally includes image sensor(s) 210. Image sensor(s) 210 optionally include one or more visible light image sensors, such as charge-coupled device (CCD) sensors, and/or complementary metal-oxide-semiconductor (CMOS) sensors operable to obtain images of physical objects from the real-world environment. Image sensor(s) 210 also optionally include one or more infrared (IR) sensors, such as a passive or an active IR sensor, for detecting infrared light from the real-world environment. For example, an active IR sensor includes an IR emitter for emitting infrared light into the real-world environment. Image sensor(s) 210 also optionally include one or more cameras configured to capture movement of physical objects in the real-world environment. Image sensor(s) 210 also optionally include one or more depth sensors configured to detect the distance of physical objects from device 200. In some embodiments, information from one or more depth sensors can allow the device to identify and differentiate objects in the real-world environment from other objects in the real-world environment. In some embodiments, one or more depth sensors can allow the device to determine the texture and/or topography of objects in the real-world environment.
In some embodiments, device 200 uses CCD sensors, event cameras, and depth sensors in combination to detect the physical environment around device 200. In some embodiments, image sensor(s) 210 include a first image sensor and a second image sensor. The first image sensor and the second image sensor work in tandem and are optionally configured to capture different information of physical objects in the real-world environment. In some embodiments, the first image sensor is a visible light image sensor and the second image sensor is a depth sensor. In some embodiments, device 200 uses image sensor(s) 210 to detect the position and orientation of device 200 and/or display generation component(s) 224 in the real-world environment. For example, device 200 uses image sensor(s) 210 to track the position and orientation of display generation component(s) 224 relative to one or more fixed objects in the real-world environment.
In some embodiments, device 200 includes microphone(s) 218 or other audio sensors. Device 200 uses microphone(s) 218 to detect sound from the user and/or the real-world environment of the user. In some embodiments, microphone(s) 218 include an array of microphones (a plurality of microphones) that optionally operate in tandem, such as to identify ambient noise or to locate the source of sound in space of the real-world environment.
Device 200 includes location sensor(s) 214 for detecting a location of device 200 and/or display generation component(s) 224. For example, location sensor(s) 214 can include a GPS receiver that receives data from one or more satellites and allows device 200 to determine the device's absolute position in the physical world.
Device 200 includes orientation sensor(s) 216 for detecting orientation and/or movement of device 200 and/or display generation component(s) 224. For example, device 200 uses orientation sensor(s) 216 to track changes in the position and/or orientation of device 200 and/or display generation component(s) 224, such as with respect to physical objects in the real-world environment. Orientation sensor(s) 216 optionally include one or more gyroscopes and/or one or more accelerometers.
Device 200 includes hand tracking sensor(s) 230 and/or eye tracking sensor(s) 232, in some embodiments. Hand tracking sensor(s) 230 are configured to track the position/location of one or more portions of the user's hands, and/or motions of one or more portions of the user's hands with respect to the extended reality environment, relative to the display generation component(s) 224, and/or relative to another defined coordinate system. Eye tracking sensor(s) 232 are configured to track the position and movement of a user's gaze (eyes, face, or head, more generally) with respect to the real-world or extended reality environment and/or relative to the display generation component(s) 224. In some embodiments, hand tracking sensor(s) 230 and/or eye tracking sensor(s) 232 are implemented together with the display generation component(s) 224. In some embodiments, the hand tracking sensor(s) 230 and/or eye tracking sensor(s) 232 are implemented separate from the display generation component(s) 224.
In some embodiments, the hand tracking sensor(s) 230 can use image sensor(s) 210 (e.g., one or more IR cameras, 3D cameras, depth cameras, etc.) that capture three-dimensional information from the real-world including one or more hands (e.g., of a human user). In some examples, the hands can be resolved with sufficient resolution to distinguish fingers and their respective positions. In some embodiments, one or more image sensor(s) 210 are positioned relative to the user to define a field of view of the image sensor(s) and an interaction space in which finger/hand position, orientation and/or movement captured by the image sensors are used as inputs (e.g., to distinguish from a user's resting hand or other hands of other persons in the real-world environment). Tracking the fingers/hands for input (e.g., gestures) can be advantageous in that it does not require the user to touch, hold or wear any sort of beacon, sensor, or other marker.
In some embodiments, eye tracking sensor(s) 232 includes at least one eye tracking camera (e.g., infrared (IR) cameras) and/or illumination sources (e.g., IR light sources, such as LEDs) that emit light towards a user's eyes. The eye tracking cameras may be pointed towards a user's eyes to receive reflected IR light from the light sources directly or indirectly from the eyes. In some embodiments, both eyes are tracked separately by respective eye tracking cameras and illumination sources, and a focus/gaze can be determined from tracking both eyes. In some embodiments, one eye (e.g., a dominant eye) is tracked by a respective eye tracking camera/illumination source(s).
Device 200 is not limited to the components and configuration of
The examples described below provide ways in which an electronic device manipulates an object based on the type of selection input received via a pointing device (e.g., stylus) that is held by a hand of the user. Efficient user interactions improve the speed and accuracy of generating content, thereby improving the creation of XR environments and XR objects. Efficient user interfaces also enhance the user's interactions with the electronic device by reducing the difficulties in the object creation process. Enhancing interactions with a device reduces the amount of time needed by a user to perform operations, and thus reduces the power usage of the device and increases battery life for battery-powered devices. When a person uses a device, that person is optionally referred to as a user of the device.
In
As shown in
In some embodiments, stylus 308 is a passive pointing device, such as a pointing stick, pencil, pen, etc., or any suitable implement that is capable of indicating interest in an object via the orientation of the implement. In some embodiments, stylus 308 is an active pointing device that optionally includes one or more sensors and communication circuitry that is capable of communicating with the electronic device (e.g., wirelessly, via a one-way communication channel, or a two-way communication channel). In some embodiments, stylus 308 is paired with the electronic device.
The description herein describes embodiments in which a physical object (or representation thereof, such as hand 306 and/or stylus 308) interacts with one or more virtual objects in a content generation environment. As a mixed reality system, the device is optionally able to selectively display portions and/or objects of the physical environment such that the respective portions and/or objects of the physical environment appear as if they exist in the content generation environment displayed by the electronic device. Similarly, the device is optionally able to display virtual objects in the content generation environment to appear as if the virtual objects exist in the real world (e.g., physical environment) by placing the virtual objects at respective locations in the content generation environment that have corresponding locations in the real world. For example, the device optionally displays a virtual cube (e.g., such as cube 304) such that it appears as if a real cube is placed on top of a table in the physical environment. In some embodiments, each location in the content generation environment has a corresponding location in the physical environment. Thus, when the device is described as displaying a virtual object at a respective location with respect to a physical object (e.g., such as a location at or near the hand of the user, or at or near a physical table), the device displays the virtual object at a particular location in the content generation environment such that it appears as if the virtual object is at or near the physical object in the physical world (e.g., the virtual object is displayed at a location in the content generation environment that corresponds to a location in the physical environment at which the virtual object would be displayed if it were a real object at that particular location).
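For example, the correspondence between locations can be thought of as a coordinate-space mapping. The following Swift sketch assumes, purely for illustration, that the content generation environment is registered to the physical environment by a single rigid transform; the type and variable names are hypothetical.

```swift
import simd

// Minimal sketch of mapping a physical-world location to its corresponding
// location in the content generation environment, assuming a single rigid
// registration transform between the two spaces.
struct EnvironmentMapping {
    // Transform from physical-world (e.g., device tracking) coordinates to
    // content generation environment coordinates.
    var environmentFromPhysical: simd_float4x4

    // Where a physical point (e.g., the top of a real table) lands in the
    // content generation environment.
    func environmentPoint(forPhysicalPoint p: SIMD3<Float>) -> SIMD3<Float> {
        let homogeneous = SIMD4<Float>(p.x, p.y, p.z, 1)
        let mapped = environmentFromPhysical * homogeneous
        return SIMD3<Float>(mapped.x, mapped.y, mapped.z)
    }
}

// Example: place a virtual cube so that it appears to rest on a physical tabletop.
let mapping = EnvironmentMapping(environmentFromPhysical: matrix_identity_float4x4)
let physicalTabletopPoint = SIMD3<Float>(0.0, 0.74, -0.5)   // meters, illustrative values
let cubePosition = mapping.environmentPoint(forPhysicalPoint: physicalTabletopPoint)
```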
Similarly, a user is optionally able to interact with virtual objects in the content generation environment using one or more hands as if the virtual objects were real objects in the physical environment. Thus, in some embodiments, the hands of the user are displayed at a respective location in the content generation environment and are treated as if they were objects in the content generation environment that are able to interact with the virtual objects in the content generation environment as if they were real physical objects in the physical environment. In some embodiments, a user is able to move his or her hands to cause the representations of the hands in the content generation environment to move in conjunction with the movement of the user's hand.
In some embodiments described below, the device is optionally able to determine whether a pointing device (e.g., stylus 308) is directed to and/or pointed at a virtual object in a content generation environment. In some embodiments, because stylus 308 is a physical object in the physical world and virtual objects exist only in the content generation environment, to determine where and at what the pointing device is pointed, the device performs a process of mapping interactions with the physical world to interactions with the content generation environment. For example, the device is able to determine, based on the orientation of the pointing device, that the pointing device is pointed at and/or oriented towards a particular location in the physical environment. Based on this determination, the device is able to map the location in the physical environment at which the pointing device is pointed to its corresponding location in the content generation environment and determine that the pointing device is pointed at and/or oriented towards the determined location in the content generation environment. For example, if the device determines that stylus 308 is pointed at a location in the center of the surface of table 302 in the physical environment, the device is able to determine that stylus 308 is pointed at cube 304 because in content generation environment 300, cube 304 is located at the center of the surface of table 302 (e.g., cube 304 is at the corresponding location in content generation environment 300 at which stylus 308 is pointed). In some embodiments, the representation of hand 306 and the representation of stylus 308 are displayed such that when stylus 308 is determined to be pointing at a respective virtual object, the representation of stylus 308 in content generation environment 300 appears as if it is pointed at the respective virtual object.
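One way such a determination could be implemented is as a hit test of the stylus's pointing ray against the virtual objects. The following Swift sketch assumes each virtual object is approximated by a bounding sphere expressed in content generation environment coordinates and that the pointing ray has already been mapped into the same space; the sphere approximation and the names are illustrative assumptions.

```swift
import simd

// Minimal sketch of finding the object the stylus is pointed at: intersect the
// pointing ray with each object's bounding sphere and keep the nearest hit
// (i.e., the object with an unobscured path from the stylus).
struct BoundingSphere {
    var id: String
    var center: SIMD3<Float>
    var radius: Float
}

func pointedObject(rayOrigin: SIMD3<Float>,
                   rayDirection: SIMD3<Float>,      // assumed to be normalized
                   objects: [BoundingSphere]) -> String? {
    var best: (id: String, distance: Float)? = nil
    for object in objects {
        let toCenter = object.center - rayOrigin
        let projection = simd_dot(toCenter, rayDirection)
        guard projection > 0 else { continue }      // object is behind the stylus
        let closestPoint = rayOrigin + projection * rayDirection
        let distanceFromAxis = simd_length(object.center - closestPoint)
        if distanceFromAxis <= object.radius {
            if best == nil || projection < best!.distance {
                best = (object.id, projection)      // keep the closest intersected object
            }
        }
    }
    return best?.id
}
```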
Returning to
In some embodiments, the orientation of stylus 308 is determined using one or more cameras and/or sensors (e.g., depth sensors and/or time-of-flight sensors) of the electronic device. For example, the one or more cameras or sensors of the electronic device can “see” stylus 308 and extrapolate using the orientation of stylus 308 to determine the location and object at which stylus 308 is pointing. In some embodiments, the orientation of stylus 308 is determined using one or more sensors integrated into stylus 308. For example, stylus 308 can include an accelerometer, a gyroscope, and/or any other suitable position and/or orientation sensor. In some embodiments, using information from the sensors in stylus 308 and/or using information from the sensors of the electronic device, the electronic device is able to determine the orientation of stylus 308 (e.g., the direction at which stylus 308 is pointed).
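As an illustrative sketch of how a pointing ray could be derived from such sensor information, the following Swift code assumes the device can obtain the stylus tip position and an orientation quaternion (e.g., fused from the stylus's accelerometer/gyroscope or estimated from the device's cameras); the choice of the stylus's local forward axis and the names are assumptions.

```swift
import simd

// Minimal sketch of turning a stylus pose into the ray used for hit testing.
struct StylusPose {
    var tipPosition: SIMD3<Float>   // in physical-world coordinates
    var orientation: simd_quatf     // rotation from stylus-local space to world space
}

func pointingRay(for pose: StylusPose) -> (origin: SIMD3<Float>, direction: SIMD3<Float>) {
    // Assume the stylus barrel points along its local -Z axis.
    let localForward = SIMD3<Float>(0, 0, -1)
    let worldForward = simd_normalize(pose.orientation.act(localForward))
    return (pose.tipPosition, worldForward)
}
```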
In
In
In some embodiments, in response to detecting that hand 306 performed “Gesture A” while stylus 308 is directed to cube 304, cube 304 is selected and visually modified to include highlighting 312 (e.g., optionally removing highlighting 310, or transforming highlighting 310 into highlighting 312), as shown in
As shown in
In some embodiments, cube 304 can be de-selected via one or more de-selection mechanisms. For example, in response to another object being selected, cube 304 is de-selected, as described above. In another example, cube 304 can be de-selected in response to a selection of a de-selection affordance, in response to receiving a selection input while stylus 308 is not directed to any object in content generation environment 300, or in response to receiving a second selection input of the first type while stylus 308 is directed to cube 304. In some embodiments, in response to cube 304 being de-selected, cube 304 is visually modified to return cube 304 to its original visual characteristic before cube 304 was selected (e.g., removing highlighting 312). In some embodiments, if stylus 308 remains directed to cube 304 when cube 304 is de-selected, cube 304 optionally is displayed with highlighting 310 (e.g., highlighting 312 is removed, or highlighting 312 is transformed into highlighting 310).
Thus, as described above, based on the orientation of stylus 308, a virtual object (or a physical object or representation thereof) can be highlighted to indicate that stylus 308 is pointed at the virtual object (e.g., and to indicate that user inputs may be directed to the virtual object). For example, if the orientation of stylus 308 is such that stylus 308 is pointed at a first virtual object, then the first virtual object is visually modified to be displayed with highlighting 310, but if the orientation of stylus 308 is such that stylus 308 is pointed at a second virtual object (or a physical object or representation thereof), then the second virtual object is visually modified to be displayed with highlighting 310. In some embodiments, if one virtual object is directly behind another such that stylus 308 is pointed towards both virtual objects, the virtual object that is closer to stylus 308 (e.g., the virtual object which has an unobscured path from stylus 308) is visually modified to be displayed with highlighting 310. In some embodiments, while a virtual object is displayed with highlighting 310 (e.g., while stylus 308 is directed to the virtual object), in response to receiving an input of a first type (e.g., a selection input, such as Gesture A), the virtual object is selected and displayed with highlighting 312.
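The hover and selection highlighting described above can be summarized as a small set of visual states. The following Swift sketch mirrors that description (no highlighting, hover highlighting such as highlighting 310, and selection highlighting such as highlighting 312); the state and event names are illustrative.

```swift
// Minimal sketch of the visual states of a virtual object and the transitions
// between them based on where the stylus is pointed and on selection inputs.
enum ObjectHighlight {
    case none        // original visual characteristic
    case hover       // e.g., highlighting 310: the stylus is pointed at the object
    case selected    // e.g., highlighting 312: the object is selected for input
}

enum PointerEvent {
    case stylusPointedAtObject
    case stylusPointedAway
    case selectionInput      // e.g., "Gesture A" received while pointed at the object
    case deselection         // de-selection affordance, another object selected, etc.
}

func nextHighlight(current: ObjectHighlight, event: PointerEvent) -> ObjectHighlight {
    switch (current, event) {
    case (.none, .stylusPointedAtObject): return .hover
    case (.hover, .stylusPointedAway):    return .none
    case (.hover, .selectionInput):       return .selected
    case (.selected, .deselection):       return .hover   // or .none if the stylus has moved away
    default:                              return current
    }
}
```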
It is understood that stylus 308 need not necessarily remain pointing at a respective virtual object during the entirety of the selection input. For example, receiving the tap input on stylus 308 can cause stylus 308 to temporarily change orientations such that stylus 308 temporarily points away from the respective virtual object. Thus, in some embodiments, the device can implement a buffer and/or lag time (e.g., a de-bouncing process) such that if stylus 308 is directed to the virtual object within a threshold amount of time before the selection input was received (e.g., 0.1 seconds, 0.5 seconds, 1 second, etc.) and/or stylus 308 moved by less than a threshold amount when the selection input was received (e.g., moved by less than 1 mm, 3 mm, 5 mm, 10 mm, etc.), then the device interprets the selection input as being directed to the virtual object (e.g., as if stylus 308 were pointed at the virtual object when the selection input was received).
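As one illustrative sketch of such a de-bouncing process, the following Swift code attributes a selection input to an object if the stylus was pointed at that object within a short time window before the input and the stylus tip has not moved too far since (the description above permits either criterion; this sketch requires both). The thresholds and names are illustrative.

```swift
import Foundation
import simd

// Minimal sketch of de-bouncing a selection input against recent pointing samples.
struct PointingSample {
    var objectID: String?           // object the stylus was pointed at, if any
    var tipPosition: SIMD3<Float>
    var timestamp: Date
}

struct DebounceThresholds {
    var maximumAge: TimeInterval = 0.5   // seconds
    var maximumTravel: Float = 0.005     // meters (5 mm)
}

// Returns the object to which the selection input should be directed, or nil.
func debouncedTarget(samples: [PointingSample],              // oldest first, most recent last
                     selectionTime: Date,
                     tipPositionAtSelection: SIMD3<Float>,
                     thresholds: DebounceThresholds = DebounceThresholds()) -> String? {
    for sample in samples.reversed() {
        guard selectionTime.timeIntervalSince(sample.timestamp) <= thresholds.maximumAge else { break }
        let travel = simd_length(tipPositionAtSelection - sample.tipPosition)
        if let id = sample.objectID, travel <= thresholds.maximumTravel {
            return id
        }
    }
    return nil
}
```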
It is understood that the above-described method of selecting an object can be performed on virtual objects in content generation environment 300 or real world objects (or representations thereof) that are displayed in content generation environment 300.
Content generation environment 400 includes table 402, cube 404, and representations of hand 406 and stylus 408 (e.g., similar to table 302, cube 304, hand 306, and stylus 308, respectively, described above with respect to
In
In some embodiments, in response to detecting that hand 406 performed “Gesture B” while stylus 408 is directed to cube 404, cube 404 is selected for movement operations and visually modified to include highlighting 412 (e.g., optionally removing highlighting 410, or transforming highlighting 410 into highlighting 412, via a process similar to described above with respect to
In some embodiments, highlighting 412 is similar to highlighting 312 described above with respect to
In some embodiments, in addition or alternatively to displaying cube 404 with highlighting 412, in response to detecting that hand 406 performed “Gesture B” while stylus 408 is directed to cube 404, the device displays movement indicator 414, as shown in
In some embodiments, while hand 406 maintains “Gesture B” (e.g., while cube 404 is selected for movement), cube 404 moves in accordance with a movement of hand 406 (and/or movement of stylus 408). For example, in
In some embodiments, while hand 406 maintains Gesture B (e.g., maintains contact with stylus 408 by the finger that performed the tap-and-hold input), cube 404 continues to move in accordance with the movement of hand 406 (and/or stylus 408). In some embodiments, in response to detecting that the contact with stylus 408 by the finger that performed the tap-and-hold gesture has terminated (e.g., termination of Gesture B), the movement operation for cube 404 is terminated and movements of hand 406 (and/or stylus 408) do not cause cube 404 to move in content generation environment 400. In some embodiments, in response to detecting that hand 406 has terminated Gesture B, cube 404 is no longer selected for movement, no longer displayed with highlighting 412, and is optionally reverted to its original visual characteristic (e.g., the visual characteristic that it had before it was modified to be displayed with highlighting 410 and highlighting 412). In some embodiments, if stylus 408 remains pointed at cube 404 when hand 406 terminated Gesture B, then cube 404 is visually modified to include highlighting 410 (e.g., optionally removing highlighting 412 or transforming highlighting 412 into highlighting 410).
Thus, as described above, because the selection input was a second type of selection input, cube 404 is selected for movement operations. In some embodiments, when cube 404 is selected for movement operations, other types of manipulations are not available. For example, while cube 404 is selected for movement operations, rotating hand 406 (and/or stylus 408) does not cause cube 404 to rotate. Similarly, cube 404 optionally cannot be resized while cube 404 is selected for movement operations. Thus, cube 404 is locked into movement operations because the selection input was the second type of selection input.
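As an illustrative sketch of the movement operation described above, the following Swift code updates an object's position from per-frame changes in the stylus (or hand) position while the second type of selection input is maintained, and deliberately consumes no orientation input because the object is locked into movement; the names are illustrative.

```swift
import simd

// Minimal sketch of a movement manipulation session that begins when Gesture B
// is detected and ends when it terminates.
struct MoveSession {
    var objectPosition: SIMD3<Float>
    var lastStylusPosition: SIMD3<Float>

    // Called each frame while the gesture is maintained.
    mutating func update(stylusPosition: SIMD3<Float>) {
        let delta = stylusPosition - lastStylusPosition
        objectPosition += delta          // optionally scaled by a movement gain
        lastStylusPosition = stylusPosition
        // No rotation is applied here: while selected for movement,
        // rotating the hand/stylus does not rotate the object.
    }
}

// Example: two frames of stylus movement drag the object along with the hand.
var session = MoveSession(objectPosition: SIMD3<Float>(0, 0.8, -0.5),
                          lastStylusPosition: SIMD3<Float>(0.10, 0.90, -0.30))
session.update(stylusPosition: SIMD3<Float>(0.12, 0.90, -0.30))
session.update(stylusPosition: SIMD3<Float>(0.15, 0.91, -0.30))
```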
In some embodiments, cube 404 can be selected for movement operations even if cube 404 is currently selected for input (e.g., in response to a first type of selection input, such as in
Content generation environment 500 includes table 502, cube 504, and representations of hand 506 and stylus 508 (e.g., similar to table 302, cube 304, hand 306, and stylus 308, respectively, described above with respect to
In
In some embodiments, in response to detecting that hand 506 performed “Gesture C” while stylus 508 is directed to cube 504, cube 504 is selected for rotation and visually modified to include highlighting 512 (e.g., optionally removing highlighting 510, or transforming highlighting 510 into highlighting 512, via a process similar to described above with respect to
In some embodiments, highlighting 512 is similar to highlighting 312 described above with respect to
In some embodiments, in addition or alternatively to displaying cube 504 with highlighting 512, in response to detecting that hand 506 performed “Gesture C” while stylus 508 is directed to cube 504, the device displays rotation indicator 514, as shown in
In some embodiments, while hand 506 maintains “Gesture C” (e.g., while cube 504 is selected for rotation), cube 504 rotates in accordance with a rotation of hand 506 (and/or rotation of stylus 508). For example, in
In some embodiments, the rotation of cube 504 locks into one rotation orientation. For example, if the rotation of hand 506 is primarily a first type of rotation (e.g., a roll, yaw, or pitch rotation), then cube 504 locks into that type of rotation and does not rotate in other orientations, even if hand 506 also rotates in the other orientations. For example, if the roll component of the rotation of hand 506 has a greater magnitude than the yaw component of the rotation of hand 506 (e.g., and there was no pitch rotation component), then the roll rotation is the primary type of rotation and cube 504 locks into the roll rotation (e.g., only rotates in the roll orientation) in accordance with the roll component of the rotation of hand 506, even if hand 506 also includes yaw and/or pitch rotation components.
In some embodiments, cube 504 does not lock into a particular rotation orientation and is able to rotate in any orientation based on the rotation of hand 506. For example, if the rotation of hand 506 includes roll, yaw, and pitch rotation components, cube 504 rotates in the roll orientation in accordance with the roll component of the rotation of hand 506, in the yaw orientation in accordance with the yaw component of the rotation of hand 506, and in the pitch orientation in accordance with the pitch component of the rotation of hand 506.
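As an illustrative sketch of the rotation behavior described above, the following Swift code applies either only the dominant rotation component (the locked behavior) or all components (free rotation). It assumes the per-frame change in hand/stylus orientation has already been decomposed into roll, pitch, and yaw angles; that decomposition, the axis conventions, and the names are assumptions.

```swift
import simd

// Minimal sketch of applying a hand-rotation delta to an object's orientation,
// optionally locked to the dominant rotation axis.
struct RotationDelta {
    var roll: Float     // radians, about the forward axis
    var pitch: Float    // radians, about the side-to-side axis
    var yaw: Float      // radians, about the vertical axis
}

enum RotationAxis { case roll, pitch, yaw }

// The axis with the largest-magnitude component of the initial rotation.
func dominantAxis(of delta: RotationDelta) -> RotationAxis {
    let r = abs(delta.roll), p = abs(delta.pitch), y = abs(delta.yaw)
    if r >= p && r >= y { return .roll }
    return p >= y ? .pitch : .yaw
}

func apply(_ delta: RotationDelta,
           to orientation: simd_quatf,
           lockedAxis: RotationAxis?) -> simd_quatf {
    let rollRotation  = simd_quatf(angle: delta.roll,  axis: SIMD3<Float>(0, 0, 1))
    let pitchRotation = simd_quatf(angle: delta.pitch, axis: SIMD3<Float>(1, 0, 0))
    let yawRotation   = simd_quatf(angle: delta.yaw,   axis: SIMD3<Float>(0, 1, 0))
    guard let axis = lockedAxis else {
        // Free rotation: apply all components of the hand's rotation.
        return yawRotation * pitchRotation * rollRotation * orientation
    }
    switch axis {
    case .roll:  return rollRotation * orientation
    case .pitch: return pitchRotation * orientation
    case .yaw:   return yawRotation * orientation
    }
}
```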
In some embodiments, while hand 506 maintains Gesture C (e.g., maintains the contact of stylus 508 by the finger that performed the double tap-and-hold input), cube 504 continues to rotate in accordance with the rotation of hand 506 (and/or stylus 508). In some embodiments, in response to detecting that the contact with stylus 508 by the finger that performed the double tap-and-hold gesture has terminated (e.g., termination of Gesture C), the rotation operation for cube 504 is terminated and rotations of hand 506 (and/or stylus 508) do not cause cube 504 to rotate in content generation environment 500. In some embodiments, in response to detecting that hand 506 has terminated Gesture C, cube 504 is no longer selected for rotation, no longer displayed with highlighting 512, and is optionally reverted to its original visual characteristic (e.g., the visual characteristic that it had before it was modified to be displayed with highlighting 510 and highlighting 512). In some embodiments, if stylus 508 remains pointed at cube 504 when hand 506 terminated Gesture C, then cube 504 is visually modified to include highlighting 510 (e.g., optionally removing highlighting 512 or transforming highlighting 512 into highlighting 510).
Thus, as described above, because the selection input was a third type of selection input, cube 504 is selected for rotation operations. In some embodiments, when cube 504 is selected for rotation operations, other types of manipulations are not available. For example, while cube 504 is selected for rotation operations, moving hand 506 (and/or stylus 508) does not cause cube 504 to move. Similarly, cube 504 optionally cannot be resized while cube 504 is selected for rotation operations. Thus, cube 504 is locked into rotation operations because the selection input was the third type of selection input.
In some embodiments, cube 504 can be selected for rotation operations even if cube 504 is currently selected (e.g., in response to a first type of selection input, such as in
In some embodiments, an electronic device (e.g., a mobile device (e.g., a tablet, a smartphone, a media player, or a wearable device), a computer, etc. such as device 100 and/or device 200) in communication with a display generation component (e.g., a display integrated with the electronic device (optionally a touch screen display) and/or an external display such as a monitor, projector, television, head-mounted display, etc.) and a pointing device (e.g., a stylus, a pencil, a pen, a pointer, etc.) displays (602), via the display generation component, a three-dimensional environment, including a first virtual object at a first location in the three-dimensional environment, such as cube 304 in content generation environment 300 in
In some embodiments, while displaying the three-dimensional environment, the electronic device receives (604), via the pointing device, a sequence of user inputs including a selection input directed to the first virtual object and a change in a pose of the pointing device. For example, in
In some embodiments, in accordance with a determination that the selection input is a first type of selection input (608) (e.g., optionally in response to receiving the sequence of user inputs), the electronic device performs (610) a first type of manipulation on the first virtual object in accordance with the change in the pose of the pointing device. For example, in
In some embodiments, in accordance with a determination that the selection input is a second type of selection input (e.g., optionally in response to receiving the sequence of user inputs), different from the first type of selection input (612), the electronic device performs (614) a second type of manipulation on the first virtual object, different from the first type of manipulation, in accordance with the change in the pose of the pointing device. For example, in
Additionally or alternatively, in some embodiments, performing the first type of manipulation includes moving the first virtual object in accordance with a lateral movement of the pointing device, such as in
Additionally or alternatively, in some embodiments, the first type of selection input includes a single tap input with a respective finger followed by a continued contact by the respective finger with the pointing device, and the second type of selection input includes a double tap input with the respective finger followed by the continued contact by the respective finger with the pointing device.
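As an illustrative sketch, distinguishing these two selection input types can reduce to counting taps and checking that contact with the stylus is maintained; the tap count and contact flag are assumed to be reported by the stylus's touch-sensitive surface, and the mapping to manipulation types shown in the comments follows the examples above.

```swift
// Minimal sketch of classifying a selection input by tap count plus continued contact.
enum SelectionInputType {
    case firstType    // single tap-and-hold (e.g., selects the object for movement)
    case secondType   // double tap-and-hold (e.g., selects the object for rotation)
    case none
}

func classifySelectionInput(tapCount: Int, contactMaintained: Bool) -> SelectionInputType {
    guard contactMaintained else { return .none }
    switch tapCount {
    case 1:  return .firstType
    case 2:  return .secondType
    default: return .none
    }
}
```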
Additionally or alternatively, in some embodiments, the change in the pose of the pointing device includes a first type of change in the pose of the pointing device and a second type of change in the pose of the pointing device. Additionally or alternatively, in some embodiments, performing the first type of manipulation includes performing the first type of manipulation on the first virtual object in accordance with the first type of change in the pose of the pointing device, without regard to the second type of change in the pose of the pointing device. Additionally or alternatively, in some embodiments, performing the second type of manipulation includes performing the second type of manipulation on the first virtual object in accordance with the second type of change in the pose of the pointing device, without regard to the first type of change in the pose of the pointing device. For example, if the selection input is the first type of selection input, the manipulation of the object locks into the first type of manipulation and the second type of manipulation is not performed and conversely if the selection input is the second type of selection input, the manipulation of the object locks into the second type of manipulation and the first type of manipulation is not performed.
Additionally or alternatively, in some embodiments, receiving the sequence of user inputs includes detecting the change in the pose of the pointing device via one or more sensors of the pointing device. Additionally or alternatively, in some embodiments, receiving the sequence of user inputs includes detecting the change in the pose of the pointing device via one or more sensors of the electronic device. In some embodiments, a change in the pose of the pointing device (and/or hand) includes a change in the position (e.g., absolute position in physical space or relative position with respect to other objects) and/or a change in the orientation (e.g., a rotation of the pointing device and/or hand).
Additionally or alternatively, in some embodiments, receiving the selection input directed to the first virtual object includes detecting that an orientation of the pointing device is directed to the first virtual object when the selection input was received, such as stylus 408 being pointed at cube 404 when Gesture B was performed in
Additionally or alternatively, in some embodiments, the three-dimensional environment includes a second virtual object at a second location in the three-dimensional environment, different from the first location. Additionally or alternatively, in some embodiments, while displaying the three-dimensional environment, the electronic device receives, via the pointing device, a second sequence of user inputs including a second selection input directed to the second virtual object and a second change in a pose of the pointing device. Additionally or alternatively, in some embodiments, in accordance with a determination that the second selection input is the first type of selection input (e.g., optionally in response to receiving the second sequence of user inputs), the electronic device performs the first type of manipulation on the second virtual object in accordance with the second change in the pose of the pointing device. Additionally or alternatively, in some embodiments, in accordance with a determination that the second selection input is the second type of selection input (e.g., optionally in response to receiving the second sequence of user inputs), the electronic device performs the second type of manipulation on the second virtual object, in accordance with the second change in the pose of the pointing device.
Additionally or alternatively, in some embodiments, while displaying the three-dimensional environment, the electronic device detects that an orientation of the pointing device is directed to the first virtual object. Additionally or alternatively, in some embodiments, in response to detecting that the orientation of the pointing device is directed to the first virtual object, the electronic device displays the first virtual object with a first visual characteristic, wherein before detecting that the orientation of the pointing device is directed to the first virtual object, the first virtual object has a second visual characteristic, different from the first visual characteristic. Additionally or alternatively, in some embodiments, in response to receiving the selection input directed to the first virtual object, the electronic device displays the first virtual object with a third visual characteristic, different from the first and second visual characteristics. For example, in
Additionally or alternatively, in some embodiments, in accordance with the determination that the selection input is the first type of selection input (e.g., optionally in response to receiving the sequence of user inputs), the electronic device displays, via the display generation component, a first manipulation indication associated with the first type of manipulation, such as movement indicator 414 in
Additionally or alternatively, in some embodiments, the three-dimensional environment includes a representation of the pointing device, wherein the representation of the pointing device has an orientation based on an orientation of the pointing device, such as stylus 308 in
It should be understood that the particular order in which the operations in
The operations in the information processing methods described above are, optionally, implemented by running one or more functional modules in an information processing apparatus such as general-purpose processors (e.g., as described with respect to
The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best use the invention and various described embodiments with various modifications as are suited to the particular use contemplated.
This application claims the benefit of U.S. Provisional Application No. 63/083,022, filed Sep. 24, 2020, the content of which is incorporated herein by reference in its entirety for all purposes.
Number | Name | Date | Kind |
---|---|---|---|
5015188 | Pellosie et al. | May 1991 | A |
5809267 | Moran et al. | Sep 1998 | A |
6167433 | Maples et al. | Dec 2000 | A |
6295069 | Shirur | Sep 2001 | B1 |
6426745 | Isaacs et al. | Jul 2002 | B1 |
6750873 | Bernardini et al. | Jun 2004 | B1 |
7298370 | Middler et al. | Nov 2007 | B1 |
9294757 | Lewis et al. | Mar 2016 | B1 |
9298334 | Zimmerman et al. | Mar 2016 | B1 |
9383189 | Bridges et al. | Jul 2016 | B2 |
9396580 | Nowrouzezahrai et al. | Jul 2016 | B1 |
9519371 | Nishida | Dec 2016 | B2 |
10488941 | Lam et al. | Nov 2019 | B2 |
10642368 | Chen | May 2020 | B2 |
10664043 | Ikuta et al. | May 2020 | B2 |
10671241 | Jia et al. | Jun 2020 | B1 |
10691216 | Geisner et al. | Jun 2020 | B2 |
10762716 | Paul et al. | Sep 2020 | B1 |
10846864 | Kim et al. | Nov 2020 | B2 |
11138798 | Paul et al. | Oct 2021 | B2 |
11204678 | Baker | Dec 2021 | B1 |
11249556 | Schwarz et al. | Feb 2022 | B1 |
11262885 | Burckel | Mar 2022 | B1 |
11340756 | Faulkner et al. | May 2022 | B2 |
11347319 | Goel et al. | May 2022 | B2 |
11409363 | Chen et al. | Aug 2022 | B2 |
11416080 | Heo et al. | Aug 2022 | B2 |
11531402 | Stolzenberg | Dec 2022 | B1 |
11531459 | Poupyrev et al. | Dec 2022 | B2 |
11557102 | Palangie et al. | Jan 2023 | B2 |
11615596 | Faulkner et al. | Mar 2023 | B2 |
11641460 | Geusz et al. | May 2023 | B1 |
11669155 | Bowman et al. | Jun 2023 | B2 |
11762473 | Cipoletta et al. | Sep 2023 | B2 |
11768544 | Schwarz et al. | Sep 2023 | B2 |
11847748 | Liu et al. | Dec 2023 | B2 |
11886643 | Irie et al. | Jan 2024 | B2 |
11899845 | Chung et al. | Feb 2024 | B2 |
11909453 | Javaudin et al. | Feb 2024 | B2 |
11914759 | Klein et al. | Feb 2024 | B2 |
11928263 | Jung et al. | Mar 2024 | B2 |
11983326 | Lacey | May 2024 | B2 |
11989965 | Tarighat Mehrabani | May 2024 | B2 |
12032803 | Pastrana Vicente et al. | Jul 2024 | B2 |
20020030692 | Griesert | Mar 2002 | A1 |
20050062738 | Handley et al. | Mar 2005 | A1 |
20050248299 | Chemel et al. | Nov 2005 | A1 |
20100302245 | Best | Dec 2010 | A1 |
20110142321 | Huffman | Jun 2011 | A1 |
20120113223 | Hilliges et al. | May 2012 | A1 |
20120170089 | Kim et al. | Jul 2012 | A1 |
20120249741 | Maciocci et al. | Oct 2012 | A1 |
20130222227 | Johansson et al. | Aug 2013 | A1 |
20130321462 | Salter et al. | Dec 2013 | A1 |
20130332890 | Ramic et al. | Dec 2013 | A1 |
20140040832 | Regelous | Feb 2014 | A1 |
20140078176 | Kim et al. | Mar 2014 | A1 |
20140104206 | Anderson | Apr 2014 | A1 |
20140129990 | Xin et al. | May 2014 | A1 |
20140333666 | Poulos et al. | Nov 2014 | A1 |
20140368620 | Li et al. | Dec 2014 | A1 |
20150067580 | Um et al. | Mar 2015 | A1 |
20150121466 | Brands et al. | Apr 2015 | A1 |
20150123901 | Schwesinger et al. | May 2015 | A1 |
20150153833 | Pinault et al. | Jun 2015 | A1 |
20150317831 | Ebstyne et al. | Nov 2015 | A1 |
20150331576 | Piya et al. | Nov 2015 | A1 |
20160098093 | Cheon et al. | Apr 2016 | A1 |
20160189426 | Thomas et al. | Jun 2016 | A1 |
20160225164 | Tomlin et al. | Aug 2016 | A1 |
20160291922 | Montgomerie et al. | Oct 2016 | A1 |
20170032568 | Gharpure et al. | Feb 2017 | A1 |
20170052595 | Poulos et al. | Feb 2017 | A1 |
20170053383 | Heo | Feb 2017 | A1 |
20170178392 | Zuccarino et al. | Jun 2017 | A1 |
20170213388 | Margolis et al. | Jul 2017 | A1 |
20170221264 | Perry | Aug 2017 | A1 |
20170243352 | Kutliroff et al. | Aug 2017 | A1 |
20170251143 | Peruch et al. | Aug 2017 | A1 |
20170270715 | Lindsay et al. | Sep 2017 | A1 |
20170287215 | Lalonde et al. | Oct 2017 | A1 |
20170287225 | Powderly et al. | Oct 2017 | A1 |
20170351094 | Poulos et al. | Dec 2017 | A1 |
20180005433 | Kohler et al. | Jan 2018 | A1 |
20180088787 | Bereza et al. | Mar 2018 | A1 |
20180103209 | Fischler et al. | Apr 2018 | A1 |
20180122138 | Piya et al. | May 2018 | A1 |
20180130255 | Hazeghi et al. | May 2018 | A1 |
20180143693 | Calabrese et al. | May 2018 | A1 |
20180173404 | Smith | Jun 2018 | A1 |
20180197341 | Loberg et al. | Jul 2018 | A1 |
20180348986 | Sawaki | Dec 2018 | A1 |
20190018479 | Minami | Jan 2019 | A1 |
20190018498 | West et al. | Jan 2019 | A1 |
20190050062 | Chen et al. | Feb 2019 | A1 |
20190130622 | Hoover et al. | May 2019 | A1 |
20190155495 | Klein et al. | May 2019 | A1 |
20190164340 | Pejic et al. | May 2019 | A1 |
20190228589 | Dascola | Jul 2019 | A1 |
20190340832 | Srinivasan et al. | Nov 2019 | A1 |
20190349575 | Knepper et al. | Nov 2019 | A1 |
20190362557 | Lacey et al. | Nov 2019 | A1 |
20200005539 | Hwang et al. | Jan 2020 | A1 |
20200045249 | Francois et al. | Feb 2020 | A1 |
20200048825 | Schultz et al. | Feb 2020 | A1 |
20200128227 | Chavez et al. | Apr 2020 | A1 |
20200135141 | Day et al. | Apr 2020 | A1 |
20200214682 | Zaslavsky et al. | Jul 2020 | A1 |
20200286299 | Wang et al. | Sep 2020 | A1 |
20200379626 | Guyomard et al. | Dec 2020 | A1 |
20210034163 | Goel et al. | Feb 2021 | A1 |
20210034319 | Wang et al. | Feb 2021 | A1 |
20210225043 | Tang et al. | Jul 2021 | A1 |
20210241483 | Dryer et al. | Aug 2021 | A1 |
20210279967 | Gernoth et al. | Sep 2021 | A1 |
20210295592 | Von Cramon | Sep 2021 | A1 |
20210374221 | Markhasin et al. | Dec 2021 | A1 |
20210383097 | Guerard et al. | Dec 2021 | A1 |
20220083145 | Matsunaga et al. | Mar 2022 | A1 |
20220084279 | Lindmeier et al. | Mar 2022 | A1 |
20220121344 | Pastrana Vicente et al. | Apr 2022 | A1 |
20220148257 | Boubekeur et al. | May 2022 | A1 |
20220253136 | Holder et al. | Aug 2022 | A1 |
20220317776 | Sundstrom et al. | Oct 2022 | A1 |
20220326837 | Dessero et al. | Oct 2022 | A1 |
20220335697 | Harding et al. | Oct 2022 | A1 |
20220382385 | Chen et al. | Dec 2022 | A1 |
20220397962 | Goel et al. | Dec 2022 | A1 |
20220408164 | Lee et al. | Dec 2022 | A1 |
20220413691 | Becker et al. | Dec 2022 | A1 |
20220414975 | Becker et al. | Dec 2022 | A1 |
20220415094 | Kim et al. | Dec 2022 | A1 |
20230027040 | Wang et al. | Jan 2023 | A1 |
20230030699 | Zion et al. | Feb 2023 | A1 |
20230031832 | Lipton et al. | Feb 2023 | A1 |
20230032771 | Zion et al. | Feb 2023 | A1 |
20230076326 | Xu et al. | Mar 2023 | A1 |
20230103161 | Li et al. | Mar 2023 | A1 |
20230119162 | Lipton et al. | Apr 2023 | A1 |
20230152935 | Mckenzie et al. | May 2023 | A1 |
20230168745 | Yoda | Jun 2023 | A1 |
20230290042 | Casella et al. | Sep 2023 | A1 |
20230377259 | Becker et al. | Nov 2023 | A1 |
20230377299 | Becker et al. | Nov 2023 | A1 |
20230377300 | Becker et al. | Nov 2023 | A1 |
20240037886 | Chiu et al. | Feb 2024 | A1 |
20240103636 | Lindmeier et al. | Mar 2024 | A1 |
20240104875 | Couche et al. | Mar 2024 | A1 |
20240104876 | Couche et al. | Mar 2024 | A1 |
20240233097 | Ngo et al. | Jul 2024 | A1 |
Foreign Patent Documents

Number | Date | Country |
---|---|---|
3118722 | Jan 2017 | EP |
2540791 | Feb 2017 | GB |
10-2014-0097654 | Aug 2014 | KR |
10-2017-0027240 | Mar 2017 | KR |
10-2018-0102171 | Sep 2018 | KR |
10-2020-0110788 | Sep 2020 | KR |
10-2020-0135496 | Dec 2020 | KR |
2019172678 | Sep 2019 | WO |
2019213111 | Nov 2019 | WO |
2022147146 | Jul 2022 | WO |
Other Publications

Entry |
---|
GamedDBharat, “I want to rotate a object on double tap , Can any One help me with this?”, posted on Jul. 26, 2017. https://discussions.unity.com/t/i-want-to-rotate-a-object-on-double-tap-can-any-one-help-me-with-this/192010 (Year: 2017). |
Cas and Chary XR, “Oculus Go & Your Phone As 2nd Controller!—An Inexpensive Way To Play PC VR Games”, posted on Mar. 8, 2019. https://www.youtube.com/watch?v=i_iRVa0kemw (Year: 2019). |
Adding Environments, Available online at: https://manual.keyshot.com/manual/environments/adding-environments/, [retrieved on Jun. 9, 2023], 2 pages. |
Area Light, Available online at: https://manual.keyshot.com/manual/materials/material-types/light-sources/area-light/, [retrieved on Jun. 9, 2023], 24 pages. |
Artec Leo Full 3D Scanning Demo w/ Sample Data, Digitize Designs, LLC, Available online at: <https://www.youtube.com/watch?v=ecBKo_h3Pug>, [retrieved on Sep. 1, 2022], Feb. 22, 2019, 3 pages. |
Ex Parte Quayle Action received for U.S. Appl. No. 17/655,347, mailed on Jul. 8, 2024, 6 pages. |
Feature Highlights, Available online at: https://manual.keyshot.com/manual/whats-new/feature-highlights/, [retrieved on Jun. 9, 2023], 28 pages. |
Final Office Action received for U.S. Appl. No. 17/469,788, mailed on Nov. 16, 2023, 24 pages. |
Final Office Action received for U.S. Appl. No. 17/807,226, mailed on Nov. 30, 2023, 23 pages. |
Final Office Action received for U.S. Appl. No. 17/812,965, mailed on Jan. 31, 2024, 9 pages. |
International Search Report received for PCT Patent Application No. PCT/US2021/049520, mailed on Apr. 8, 2022, 8 pages. |
International Search Report received for PCT Patent Application No. PCT/US2022/071208, mailed on Aug. 18, 2022, 9 pages. |
International Search Report received for PCT Patent Application No. PCT/US2023/074955, mailed on Feb. 1, 2024, 6 pages. |
Light Manager, Available online at: https://manual.keyshot.com/manual/lighting/lighting-manager/, [retrieved on Jun. 9, 2023], 3 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/469,788, mailed on Mar. 2, 2023, 22 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/469,788, mailed on Mar. 21, 2024, 24 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/807,226, mailed on Jun. 26, 2023, 21 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/812,965, mailed on Jun. 8, 2023, 8 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/814,455, mailed on Feb. 16, 2024, 24 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/814,462, mailed on Feb. 1, 2024, 30 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/905,483, mailed on Mar. 27, 2024, 16 pages. |
Non-Final Office Action received for U.S. Appl. No. 18/317,893, mailed on Apr. 25, 2024, 18 pages. |
Notice of Allowance received for U.S. Appl. No. 17/807,226, mailed on Jul. 3, 2024, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 17/807,236, mailed on Feb. 5, 2024, 13 pages. |
Notice of Allowance received for U.S. Appl. No. 17/807,236, mailed on Jul. 10, 2024, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 18/317,893, mailed on Mar. 6, 2024, 8 pages. |
Notice of Allowance received for U.S. Appl. No. 18/317,893, mailed on Nov. 22, 2023, 9 pages. |
Restriction Requirement received for U.S. Appl. No. 17/905,483, mailed on Dec. 7, 2023, 7 pages. |
Search Report received for United Kingdom Patent Application No. GB2210885.6, mailed on Jan. 27, 2023, 1 page. |
Locher et al., “Mobile Phone and Cloud—a Dream Team for 3D Reconstruction”, 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), 2016, pp. 1-8. |
Ro et al., “AR Pointer: Advanced Ray-Casting Interface Using Laser Pointer Metaphor for Object Manipulation in 3D Augmented Reality Environment”, Applied Sciences, vol. 9, No. 3078, [retrieved on Jul. 27, 2020], Jul. 30, 2019, 18 pages. |
Slambekova, Dana, "Gaze and Gesture Based Object Interaction in Virtual World", [retrieved on Dec. 17, 2017]. Retrieved from the Internet: <URL:https://www.cs.rit.edu/~dxs4659/Report.pdf>, May 31, 2012, 54 pages. |
Final Office Action received for U.S. Appl. No. 17/814,462, mailed on Nov. 1, 2024, 44 pages. |
Notice of Allowance received for U.S. Appl. No. 17/469,788, mailed on Oct. 15, 2024, 7 pages. |
Notice of Allowance received for U.S. Appl. No. 17/655,347, mailed on Oct. 9, 2024, 7 pages. |
Notice of Allowance received for U.S. Appl. No. 17/812,965, mailed on Jul. 26, 2024, 6 pages. |
Notice of Allowance received for U.S. Appl. No. 17/812,965, mailed on Nov. 15, 2024, 6 pages. |
Notice of Allowance received for U.S. Appl. No. 17/814,455, mailed on Oct. 7, 2024, 10 pages. |
Notice of Allowance received for U.S. Appl. No. 18/473,180, mailed on Aug. 22, 2024, 13 pages. |
Provisional Applications

Number | Date | Country
---|---|---
63083022 | Sep 2020 | US |