The present disclosure generally relates to navigating windows and, in particular, to systems, methods, and devices for navigating windows in 3D.
Current web browsers may use a tab arrangement for open web pages and also provide a means for viewing browsing history. This organizational structure makes it difficult to concurrently view past and present web pages and/or searches.
So that the present disclosure can be understood by those of ordinary skill in the art, a more detailed description may be had by reference to aspects of some illustrative implementations, some of which are shown in the accompanying drawings.
In accordance with common practice the various features illustrated in the drawings may not be drawn to scale. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. In addition, some of the drawings may not depict all of the components of a given system, method, or device. Finally, like reference numerals may be used to denote like features throughout the specification and figures.
Various implementations disclosed herein include devices, systems, and methods for navigating windows in 3D. According to some implementations, the method is performed at a computing system including non-transitory memory and one or more processors, wherein the computing system is communicatively coupled to a display device and one or more input devices. The method includes: displaying a first content pane with a first appearance at a first z-depth within an extended reality (XR) environment, wherein the first content pane includes first content and an input field; detecting a user input directed to the input field; and, in response to detecting the user input: moving the first content pane to a second z-depth within the XR environment, wherein the second z-depth is different from the first z-depth; modifying the first content pane by changing the first content pane from the first appearance to a second appearance; and displaying a second content pane with the first appearance at the first z-depth within the XR environment.
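For illustration only, the following Swift sketch mirrors the claimed flow under assumed types; the names (ContentPane, Appearance, PaneNavigator, zDepth) and the specific offset and appearance values are hypothetical and do not come from the disclosure.

```swift
// Hypothetical representation of a content pane and its appearance.
struct Appearance {
    var translucency: Double  // 0.0 = opaque, 1.0 = fully transparent
    var blurRadius: Double
}

final class ContentPane {
    var content: String
    var appearance: Appearance
    var zDepth: Double        // distance from the viewer within the XR environment

    init(content: String, appearance: Appearance, zDepth: Double) {
        self.content = content
        self.appearance = appearance
        self.zDepth = zDepth
    }
}

final class PaneNavigator {
    private(set) var panes: [ContentPane] = []
    let firstAppearance = Appearance(translucency: 0.0, blurRadius: 0.0)
    let secondAppearance = Appearance(translucency: 0.4, blurRadius: 8.0)

    func display(_ pane: ContentPane) { panes.append(pane) }

    // In response to detecting a user input directed to the input field of `firstPane`:
    // move it to a second z-depth, change it to the second appearance, and display a
    // second content pane with the first appearance at the first z-depth.
    func handleInputFieldSelection(on firstPane: ContentPane,
                                   backwardOffset: Double = 0.25) -> ContentPane {
        let firstZDepth = firstPane.zDepth
        firstPane.zDepth = firstZDepth + backwardOffset   // second z-depth, different from the first
        firstPane.appearance = secondAppearance           // first appearance -> second appearance
        let secondPane = ContentPane(content: "suggestions / search results",
                                     appearance: firstAppearance,
                                     zDepth: firstZDepth)
        display(secondPane)
        return secondPane
    }
}
```

Here, increasing zDepth stands in for moving a pane farther from the viewer; any convention that yields a second z-depth different from the first would serve equally well.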
In accordance with some implementations, an electronic device includes one or more displays, one or more processors, a non-transitory memory, and one or more programs; the one or more programs are stored in the non-transitory memory and configured to be executed by the one or more processors and the one or more programs include instructions for performing or causing performance of any of the methods described herein. In accordance with some implementations, a non-transitory computer readable storage medium has stored therein instructions, which, when executed by one or more processors of a device, cause the device to perform or cause performance of any of the methods described herein. In accordance with some implementations, a device includes: one or more displays, one or more processors, a non-transitory memory, and means for performing or causing performance of any of the methods described herein.
In accordance with some implementations, a computing system includes one or more processors, non-transitory memory, an interface for communicating with a display device and one or more input devices, and one or more programs; the one or more programs are stored in the non-transitory memory and configured to be executed by the one or more processors and the one or more programs include instructions for performing or causing performance of the operations of any of the methods described herein. In accordance with some implementations, a non-transitory computer readable storage medium has stored therein instructions which, when executed by one or more processors of a computing system with an interface for communicating with a display device and one or more input devices, cause the computing system to perform or cause performance of the operations of any of the methods described herein. In accordance with some implementations, a computing system includes one or more processors, non-transitory memory, an interface for communicating with a display device and one or more input devices, and means for performing or causing performance of the operations of any of the methods described herein.
Numerous details are described in order to provide a thorough understanding of the example implementations shown in the drawings. However, the drawings merely show some example aspects of the present disclosure and are therefore not to be considered limiting. Those of ordinary skill in the art will appreciate that other effective aspects and/or variants do not include all of the specific details described herein. Moreover, well-known systems, methods, components, devices, and circuits have not been described in exhaustive detail so as not to obscure more pertinent aspects of the example implementations described herein.
People may sense or interact with a physical environment or world without using an electronic device. Physical features, such as a physical object or surface, may be included within a physical environment. For instance, a physical environment may correspond to a physical city having physical buildings, roads, and vehicles. People may directly sense or interact with a physical environment through various means, such as smell, sight, taste, hearing, and touch. This can be in contrast to an extended reality (XR) environment that may refer to a partially or wholly simulated environment that people may sense or interact with using an electronic device. The XR environment may include virtual reality (VR) content, mixed reality (MR) content, augmented reality (AR) content, or the like. Using an XR system, a portion of a person's physical motions, or representations thereof, may be tracked and, in response, properties of virtual objects in the XR environment may be changed in a way that complies with at least one law of nature. For example, the XR system may detect a user's head movement and adjust auditory and graphical content presented to the user in a way that simulates how sounds and views would change in a physical environment. In other examples, the XR system may detect movement of an electronic device (e.g., a laptop, tablet, mobile phone, or the like) presenting the XR environment. Accordingly, the XR system may adjust auditory and graphical content presented to the user in a way that simulates how sounds and views would change in a physical environment. In some instances, other inputs, such as a representation of physical motion (e.g., a voice command), may cause the XR system to adjust properties of graphical content.
Numerous types of electronic systems may allow a user to sense or interact with an XR environment. A non-exhaustive list of examples includes lenses having integrated display capability to be placed on a user's eyes (e.g., contact lenses), heads-up displays (HUDs), projection-based systems, head mountable systems, windows or windshields having integrated display technology, headphones/earphones, input systems with or without haptic feedback (e.g., handheld or wearable controllers), smartphones, tablets, desktop/laptop computers, and speaker arrays. Head mountable systems may include an opaque display and one or more speakers. Other head mountable systems may be configured to receive an opaque external display, such as that of a smartphone. Head mountable systems may capture images/video of the physical environment using one or more image sensors or capture audio of the physical environment using one or more microphones. Instead of an opaque display, some head mountable systems may include a transparent or translucent display. Transparent or translucent displays may direct light representative of images to a user's eyes through a medium, such as a hologram medium, optical waveguide, an optical combiner, optical reflector, other similar technologies, or combinations thereof. Various display technologies, such as liquid crystal on silicon, LEDs, μLEDs, OLEDs, laser scanning light source, digital light projection, or combinations thereof, may be used. In some examples, the transparent or translucent display may be selectively controlled to become opaque. Projection-based systems may utilize retinal projection technology that projects images onto a user's retina or may project virtual content into the physical environment, such as onto a physical surface or as a hologram.
In some implementations, the controller 110 is configured to manage and coordinate an XR experience (sometimes also referred to herein as a “XR environment” or a “virtual environment” or a “graphical environment”) for a user 150 and optionally other users. In some implementations, the controller 110 includes a suitable combination of software, firmware, and/or hardware. The controller 110 is described in greater detail below with respect to
In some implementations, the electronic device 120 is configured to present audio and/or video (A/V) content to the user 150. In some implementations, the electronic device 120 is configured to present a user interface (UI) and/or an XR environment 128 to the user 150. In some implementations, the electronic device 120 includes a suitable combination of software, firmware, and/or hardware. The electronic device 120 is described in greater detail below with respect to
According to some implementations, the electronic device 120 presents an XR experience to the user 150 while the user 150 is physically present within a physical environment 105 that includes a table 107 within the field-of-view (FOV) 111 of the electronic device 120. As such, in some implementations, the user 150 holds the electronic device 120 in his/her hand(s). In some implementations, while presenting the XR experience, the electronic device 120 is configured to present XR content (sometimes also referred to herein as “graphical content” or “virtual content”), including an XR cylinder 109, and to enable video pass-through of the physical environment 105 (e.g., including the table 107) on a display 122. For example, the XR environment 128, including the XR cylinder 109, is volumetric or three-dimensional (3D).
In one example, the XR cylinder 109 corresponds to display-locked content such that the XR cylinder 109 remains displayed at the same location on the display 122 as the FOV 111 changes due to translational and/or rotational movement of the electronic device 120. As another example, the XR cylinder 109 corresponds to world-locked content such that the XR cylinder 109 remains displayed at its origin location as the FOV 111 changes due to translational and/or rotational movement of the electronic device 120. As such, in this example, if the FOV 111 does not include the origin location, the XR environment 128 will not include the XR cylinder 109. For example, the electronic device 120 corresponds to a near-eye system, mobile phone, tablet, laptop, wearable computing device, or the like.
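As a hedged illustration of the display-locked versus world-locked distinction, the sketch below resolves a virtual object's world-space position from the current device pose; the Camera and Anchoring types are assumptions introduced here, not part of the disclosure.

```swift
import simd

struct Camera {
    var position: SIMD3<Float>
    var orientation: simd_quatf
}

enum Anchoring {
    case displayLocked(offset: SIMD3<Float>)   // fixed offset in the display/camera frame
    case worldLocked(origin: SIMD3<Float>)     // fixed position in the world frame
}

// Display-locked content follows the FOV as the device translates/rotates;
// world-locked content stays at its origin location and simply falls outside
// the view when the FOV no longer contains that location.
func worldPosition(for anchoring: Anchoring, camera: Camera) -> SIMD3<Float> {
    switch anchoring {
    case .displayLocked(let offset):
        return camera.position + camera.orientation.act(offset)
    case .worldLocked(let origin):
        return origin
    }
}
```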
In some implementations, the display 122 corresponds to an additive display that enables optical see-through of the physical environment 105 including the table 107. For example, the display 122 corresponds to a transparent lens, and the electronic device 120 corresponds to a pair of glasses worn by the user 150. As such, in some implementations, the electronic device 120 presents a user interface by projecting the XR content (e.g., the XR cylinder 109) onto the additive display, which is, in turn, overlaid on the physical environment 105 from the perspective of the user 150. In some implementations, the electronic device 120 presents the user interface by displaying the XR content (e.g., the XR cylinder 109) on the additive display, which is, in turn, overlaid on the physical environment 105 from the perspective of the user 150.
In some implementations, the user 150 wears the electronic device 120 such as a near-eye system. As such, the electronic device 120 includes one or more displays provided to display the XR content (e.g., a single display or one for each eye). For example, the electronic device 120 encloses the FOV of the user 150. In such implementations, the electronic device 120 presents the XR environment 128 by displaying data corresponding to the XR environment 128 on the one or more displays or by projecting data corresponding to the XR environment 128 onto the retinas of the user 150.
In some implementations, the electronic device 120 includes an integrated display (e.g., a built-in display) that displays the XR environment 128. In some implementations, the electronic device 120 includes a head-mountable enclosure. In various implementations, the head-mountable enclosure includes an attachment region to which another device with a display can be attached. For example, in some implementations, the electronic device 120 can be attached to the head-mountable enclosure. In various implementations, the head-mountable enclosure is shaped to form a receptacle for receiving another device that includes a display (e.g., the electronic device 120). For example, in some implementations, the electronic device 120 slides/snaps into or otherwise attaches to the head-mountable enclosure. In some implementations, the display of the device attached to the head-mountable enclosure presents (e.g., displays) the XR environment 128. In some implementations, the electronic device 120 is replaced with an XR chamber, enclosure, or room configured to present XR content in which the user 150 does not wear the electronic device 120.
In some implementations, the controller 110 and/or the electronic device 120 cause an XR representation of the user 150 to move within the XR environment 128 based on movement information (e.g., body pose data, eye tracking data, hand/limb/finger/extremity tracking data, etc.) from the electronic device 120 and/or optional remote input devices within the physical environment 105. In some implementations, the optional remote input devices correspond to fixed or movable sensory equipment within the physical environment 105 (e.g., image sensors, depth sensors, infrared (IR) sensors, event cameras, microphones, etc.). In some implementations, each of the remote input devices is configured to collect/capture input data and provide the input data to the controller 110 and/or the electronic device 120 while the user 150 is physically within the physical environment 105. In some implementations, the remote input devices include microphones, and the input data includes audio data associated with the user 150 (e.g., speech samples). In some implementations, the remote input devices include image sensors (e.g., cameras), and the input data includes images of the user 150. In some implementations, the input data characterizes body poses of the user 150 at different times. In some implementations, the input data characterizes head poses of the user 150 at different times. In some implementations, the input data characterizes hand tracking information associated with the hands of the user 150 at different times. In some implementations, the input data characterizes the velocity and/or acceleration of body parts of the user 150 such as his/her hands. In some implementations, the input data indicates joint positions and/or joint orientations of the user 150. In some implementations, the remote input devices include feedback devices such as speakers, lights, or the like.
In some implementations, the one or more communication buses 204 include circuitry that interconnects and controls communications between system components. In some implementations, the one or more I/O devices 206 include at least one of a keyboard, a mouse, a touchpad, a touchscreen, a joystick, one or more microphones, one or more speakers, one or more image sensors, one or more displays, and/or the like.
The memory 220 includes high-speed random-access memory, such as dynamic random-access memory (DRAM), static random-access memory (SRAM), double-data-rate random-access memory (DDR RAM), or other random-access solid-state memory devices. In some implementations, the memory 220 includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 220 optionally includes one or more storage devices remotely located from the one or more processing units 202. The memory 220 comprises a non-transitory computer readable storage medium. In some implementations, the memory 220 or the non-transitory computer readable storage medium of the memory 220 stores the following programs, modules and data structures, or a subset thereof described below with respect to
The operating system 230 includes procedures for handling various basic system services and for performing hardware dependent tasks.
In some implementations, a data obtainer 242 is configured to obtain data (e.g., captured image frames of the physical environment 105, presentation data, input data, user interaction data, camera pose tracking information, eye tracking information, head/body pose tracking information, hand/limb/finger/extremity tracking information, sensor data, location data, etc.) from at least one of the I/O devices 206 of the controller 110, the I/O devices and sensors 306 of the electronic device 120, and the optional remote input devices. To that end, in various implementations, the data obtainer 242 includes instructions and/or logic therefor, and heuristics and metadata therefor.
In some implementations, a mapper and locator engine 244 is configured to map the physical environment 105 and to track the position/location of at least the electronic device 120 or the user 150 with respect to the physical environment 105. To that end, in various implementations, the mapper and locator engine 244 includes instructions and/or logic therefor, and heuristics and metadata therefor.
In some implementations, a data transmitter 246 is configured to transmit data (e.g., presentation data such as rendered image frames associated with the XR environment, location data, etc.) to at least the electronic device 120 and optionally one or more other devices. To that end, in various implementations, the data transmitter 246 includes instructions and/or logic therefor, and heuristics and metadata therefor.
In some implementations, a privacy architecture 408 is configured to ingest input data and filter user information and/or identifying information within the input data based on one or more privacy filters. The privacy architecture 408 is described in more detail below with reference to
In some implementations, an eye tracking engine 412 is configured to determine an eye tracking vector 413 (e.g., with a gaze direction) based on the input data and update the eye tracking vector 413 over time as shown in
In some implementations, a body/head pose tracking engine 414 is configured to determine a pose characterization vector 415 based on the input data and update the pose characterization vector 415 over time. For example, as shown in
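By way of illustration only, the eye tracking vector 413 and the pose characterization vector 415 might be represented with data shapes such as the following; all field names are assumptions, and the disclosure does not prescribe a particular layout.

```swift
import simd

struct EyeTrackingVector {
    var gazeOrigin: SIMD3<Float>
    var gazeDirection: SIMD3<Float>   // normalized direction the user 150 is looking along
    var timestamp: Double
}

struct PoseCharacterizationVector {
    var headPosition: SIMD3<Float>
    var headOrientation: simd_quatf
    var jointPositions: [String: SIMD3<Float>]   // e.g. "leftHand", "rightHand"
    var timestamp: Double
}

// The engines update these vectors over time as new (privacy-filtered) sensor data arrives.
func updated(_ vector: EyeTrackingVector,
             direction: SIMD3<Float>, at time: Double) -> EyeTrackingVector {
    var next = vector
    next.gazeDirection = simd_normalize(direction)
    next.timestamp = time
    return next
}
```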
In some implementations, a content selector 422 is configured to select XR content (sometimes also referred to herein as “graphical content” or “virtual content”) from a content library 425 based on one or more user requests and/or inputs (e.g., a voice command, a selection from a user interface (UI) menu of XR content items, and/or the like). The content selector 422 is described in more detail below with reference to
In some implementations, the content library 425 includes a plurality of content items such as audio/visual (A/V) content, virtual agents (VAs), and/or XR content, objects, items, scenery, etc. As one example, the XR content includes 3D reconstructions of user-captured videos, movies, TV episodes, and/or other XR content. In some implementations, the content library 425 is pre-populated or manually authored by the user 150. In some implementations, the content library 425 is located locally relative to the controller 110. In some implementations, the content library 425 is located remotely from the controller 110 (e.g., at a remote server, a cloud server, or the like).
In some implementations, a content manager 430 is configured to manage and update the layout, setup, structure, and/or the like for the XR environment 128 including one or more of VA(s), XR content, one or more user interface (UI) elements associated with the XR content, and/or the like. The content manager 430 is described in more detail below with reference to
In some implementations, the content updater 436 is configured to modify the XR environment 128 over time based on translational or rotational movement, user commands, user inputs, and/or the like. To that end, in various implementations, the content updater 436 includes instructions and/or logic therefor, and heuristics and metadata therefor.
In some implementations, the feedback engine 438 is configured to generate sensory feedback (e.g., visual feedback such as text or lighting changes, audio feedback, haptic feedback, etc.) associated with the XR environment 128. To that end, in various implementations, the feedback engine 438 includes instructions and/or logic therefor, and heuristics and metadata therefor.
In some implementations, a rendering engine 450 is configured to render an XR environment 128 (sometimes also referred to herein as a “graphical environment” or “virtual environment”) or image frame associated therewith as well as the VA(s), XR content, one or more UI elements associated with the XR content, and/or the like. To that end, in various implementations, the rendering engine 450 includes instructions and/or logic therefor, and heuristics and metadata therefor. In some implementations, the rendering engine 450 includes a pose determiner 452, a renderer 454, an optional image processing architecture 462, and an optional compositor 464. One of ordinary skill in the art will appreciate that the optional image processing architecture 462 and the optional compositor 464 may be present for video pass-through configurations but may be removed for fully VR or optical see-through configurations.
In some implementations, the pose determiner 452 is configured to determine a current camera pose of the electronic device 120 and/or the user 150 relative to the A/V content and/or XR content. The pose determiner 452 is described in more detail below with reference to
In some implementations, the renderer 454 is configured to render the A/V content and/or the XR content according to the current camera pose relative thereto. The renderer 454 is described in more detail below with reference to
In some implementations, the image processing architecture 462 is configured to obtain (e.g., receive, retrieve, or capture) an image stream including one or more images of the physical environment 105 from the current camera pose of the electronic device 120 and/or the user 150. In some implementations, the image processing architecture 462 is also configured to perform one or more image processing operations on the image stream such as warping, color correction, gamma correction, sharpening, noise reduction, white balance, and/or the like. The image processing architecture 462 is described in more detail below with reference to
In some implementations, the compositor 464 is configured to composite the rendered A/V content and/or XR content with the processed image stream of the physical environment 105 from the image processing architecture 462 to produce rendered image frames of the XR environment 128 for display. The compositor 464 is described in more detail below with reference to
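For illustration, the sketch below shows one way the rendering engine 450 could be composed so that the image processing architecture 462 and the compositor 464 participate only in video pass-through configurations; the protocol and closure signatures are hypothetical stand-ins, not the disclosure's interfaces.

```swift
struct CameraPose { /* position and orientation of the electronic device 120 / user 150 */ }

protocol ImageProcessing { func process(_ cameraFrame: [UInt8]) -> [UInt8] }
protocol Compositing { func composite(rendered: [UInt8], passThrough: [UInt8]) -> [UInt8] }

struct RenderingEngine {
    var determinePose: () -> CameraPose          // stands in for the pose determiner 452
    var render: (CameraPose) -> [UInt8]          // stands in for the renderer 454
    // Optional stages: present for video pass-through, absent for fully VR or optical see-through.
    var imageProcessing: ImageProcessing?
    var compositor: Compositing?

    func renderFrame(cameraFrame: [UInt8]?) -> [UInt8] {
        let pose = determinePose()
        let rendered = render(pose)
        guard let processing = imageProcessing,
              let compositor = compositor,
              let cameraFrame = cameraFrame else {
            return rendered   // no compositing needed without video pass-through
        }
        // Video pass-through: composite rendered XR content with the processed camera stream.
        return compositor.composite(rendered: rendered,
                                    passThrough: processing.process(cameraFrame))
    }
}
```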
Although the data obtainer 242, the mapper and locator engine 244, the data transmitter 246, the privacy architecture 408, the eye tracking engine 412, the body/head pose tracking engine 414, the content selector 422, the content manager 430, and the rendering engine 450 are shown as residing on a single device (e.g., the controller 110), it should be understood that in other implementations, any combination of the data obtainer 242, the mapper and locator engine 244, the data transmitter 246, the privacy architecture 408, the eye tracking engine 412, the body/head pose tracking engine 414, the content selector 422, the content manager 430, and the rendering engine 450 may be located in separate computing devices.
In some implementations, the functions and/or components of the controller 110 are combined with or provided by the electronic device 120 shown below in
In some implementations, the one or more communication buses 304 include circuitry that interconnects and controls communications between system components. In some implementations, the one or more I/O devices and sensors 306 include at least one of an inertial measurement unit (IMU), an accelerometer, a gyroscope, a magnetometer, a thermometer, one or more physiological sensors (e.g., blood pressure monitor, heart rate monitor, blood oximetry monitor, blood glucose monitor, etc.), one or more microphones, one or more speakers, a haptics engine, a heating and/or cooling unit, a skin shear engine, one or more depth sensors (e.g., structured light, time-of-flight, LiDAR, or the like), a localization and mapping engine, an eye tracking engine, a body/head pose tracking engine, a hand/limb/finger/extremity tracking engine, a camera pose tracking engine, or the like.
In some implementations, the one or more displays 312 are configured to present the XR environment to the user. In some implementations, the one or more displays 312 are also configured to present flat video content to the user (e.g., a 2-dimensional or “flat” AVI, FLV, WMV, MOV, MP4, or the like file associated with a TV episode or a movie, or live video pass-through of the physical environment 105). In some implementations, the one or more displays 312 correspond to touchscreen displays. In some implementations, the one or more displays 312 correspond to holographic, digital light processing (DLP), liquid-crystal display (LCD), liquid-crystal on silicon (LCoS), organic light-emitting field-effect transistor (OLET), organic light-emitting diode (OLED), surface-conduction electron-emitter display (SED), field-emission display (FED), quantum-dot light-emitting diode (QD-LED), micro-electro-mechanical system (MEMS), and/or the like display types. In some implementations, the one or more displays 312 correspond to diffractive, reflective, polarized, holographic, etc. waveguide displays. For example, the electronic device 120 includes a single display. In another example, the electronic device 120 includes a display for each eye of the user. In some implementations, the one or more displays 312 are capable of presenting AR and VR content. In some implementations, the one or more displays 312 are capable of presenting AR or VR content.
In some implementations, the image capture device 370 corresponds to one or more RGB cameras (e.g., with a complementary metal-oxide-semiconductor (CMOS) image sensor or a charge-coupled device (CCD) image sensor), IR image sensors, event-based cameras, and/or the like. In some implementations, the image capture device 370 includes a lens assembly, a photodiode, and a front-end architecture. In some implementations, the image capture device 370 includes exterior-facing and/or interior-facing image sensors.
The memory 320 includes high-speed random-access memory, such as DRAM, SRAM, DDR RAM, or other random-access solid-state memory devices. In some implementations, the memory 320 includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 320 optionally includes one or more storage devices remotely located from the one or more processing units 302. The memory 320 comprises a non-transitory computer readable storage medium. In some implementations, the memory 320 or the non-transitory computer readable storage medium of the memory 320 stores the following programs, modules and data structures, or a subset thereof including an optional operating system 330 and a presentation engine 340.
The operating system 330 includes procedures for handling various basic system services and for performing hardware dependent tasks. In some implementations, the presentation engine 340 is configured to present media items and/or XR content to the user via the one or more displays 312. To that end, in various implementations, the presentation engine 340 includes a data obtainer 342, a presenter 470, an interaction handler 420, and a data transmitter 350.
In some implementations, the data obtainer 342 is configured to obtain data (e.g., presentation data such as rendered image frames associated with the user interface or the XR environment, input data, user interaction data, head tracking information, camera pose tracking information, eye tracking information, hand/limb/finger/extremity tracking information, sensor data, location data, etc.) from at least one of the I/O devices and sensors 306 of the electronic device 120, the controller 110, and the remote input devices. To that end, in various implementations, the data obtainer 342 includes instructions and/or logic therefor, and heuristics and metadata therefor.
In some implementations, the interaction handler 420 is configured to detect user interactions with the presented A/V content and/or XR content (e.g., gestural inputs detected via hand tracking, eye gaze inputs detected via eye tracking, voice commands, etc.). To that end, in various implementations, the interaction handler 420 includes instructions and/or logic therefor, and heuristics and metadata therefor.
In some implementations, the presenter 470 is configured to present and update A/V content and/or XR content (e.g., the rendered image frames associated with the user interface or the XR environment 128 including the VA(s), the XR content, one or more UI elements associated with the XR content, and/or the like) via the one or more displays 312. To that end, in various implementations, the presenter 470 includes instructions and/or logic therefor, and heuristics and metadata therefor.
In some implementations, the data transmitter 350 is configured to transmit data (e.g., presentation data, location data, user interaction data, head tracking information, camera pose tracking information, eye tracking information, hand/limb/finger/extremity tracking information, etc.) to at least the controller 110. To that end, in various implementations, the data transmitter 350 includes instructions and/or logic therefor, and heuristics and metadata therefor.
Although the data obtainer 342, the interaction handler 420, the presenter 470, and the data transmitter 350 are shown as residing on a single device (e.g., the electronic device 120), it should be understood that in other implementations, any combination of the data obtainer 342, the interaction handler 420, the presenter 470, and the data transmitter 350 may be located in separate computing devices.
Moreover,
As shown in
Similarly, as shown in
According to some implementations, the privacy architecture 408 ingests the local sensor data 403 and the remote sensor data 405. In some implementations, the privacy architecture 408 includes one or more privacy filters associated with user information and/or identifying information. In some implementations, the privacy architecture 408 includes an opt-in feature where the electronic device 120 informs the user 150 as to what user information and/or identifying information is being monitored and how the user information and/or the identifying information will be used. In some implementations, the privacy architecture 408 selectively prevents and/or limits content delivery architecture 400 or portions thereof from obtaining and/or transmitting the user information. To this end, the privacy architecture 408 receives user preferences and/or selections from the user 150 in response to prompting the user 150 for the same. In some implementations, the privacy architecture 408 prevents the content delivery architecture 400 from obtaining and/or transmitting the user information unless and until the privacy architecture 408 obtains informed consent from the user 150. In some implementations, the privacy architecture 408 anonymizes (e.g., scrambles, obscures, encrypts, and/or the like) certain types of user information. For example, the privacy architecture 408 receives user inputs designating which types of user information the privacy architecture 408 anonymizes. As another example, the privacy architecture 408 anonymizes certain types of user information likely to include sensitive and/or identifying information, independent of user designation (e.g., automatically).
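As a minimal, hedged sketch of this gating and anonymization behavior (with assumed field names and an assumed category scheme), the privacy architecture 408 might behave along these lines:

```swift
import Foundation

struct SensorSample {
    var audio: Data?              // e.g., speech samples associated with the user 150
    var userImages: [Data]        // e.g., images of the user 150
    var deviceMotion: [Double]
}

struct PrivacyArchitecture {
    var hasInformedConsent: Bool
    var anonymizedCategories: Set<String>   // user-designated (or automatically designated) categories

    // Returns nil until informed consent is obtained; otherwise strips or
    // anonymizes the designated categories before the data reaches the rest
    // of the content delivery architecture.
    func filter(_ sample: SensorSample) -> SensorSample? {
        guard hasInformedConsent else { return nil }
        var filtered = sample
        if anonymizedCategories.contains("audio") { filtered.audio = nil }
        if anonymizedCategories.contains("images") { filtered.userImages = [] }
        return filtered
    }
}
```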
According to some implementations, the eye tracking engine 412 obtains the local sensor data 403 and the remote sensor data 405 after it has been subjected to the privacy architecture 408. In some implementations, the eye tracking engine 412 determines an eye tracking vector 413 based on the input data and updates the eye tracking vector 413 over time.
For example, the gaze direction indicates a point (e.g., associated with x, y, and z coordinates relative to the physical environment 105 or the world at-large), a physical object, or a region of interest (ROI) in the physical environment 105 at which the user 150 is currently looking. As another example, the gaze direction indicates a point (e.g., associated with x, y, and z coordinates relative to the XR environment 128), an XR object, or a region of interest (ROI) in the XR environment 128 at which the user 150 is currently looking.
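For illustration, resolving a gaze direction to a point on a content pane at a given z-depth can be modeled as a ray-plane intersection; the sketch below assumes panes lie in planes of constant z, which is a simplifying assumption rather than a requirement of the disclosure.

```swift
import simd

// Returns the point where the gaze ray crosses the plane z = paneZDepth,
// or nil if the ray is parallel to the plane or points away from it.
func gazePoint(origin: SIMD3<Float>,
               direction: SIMD3<Float>,
               paneZDepth: Float) -> SIMD3<Float>? {
    let d = simd_normalize(direction)
    guard abs(d.z) > 1e-6 else { return nil }
    let t = (paneZDepth - origin.z) / d.z
    guard t > 0 else { return nil }          // the plane is behind the viewer
    return origin + t * d
}
```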
According to some implementations, the body/head pose tracking engine 414 obtains the local sensor data 403 and the remote sensor data 405 after it has been subjected to the privacy architecture 408. In some implementations, the body/head pose tracking engine 414 determines a pose characterization vector 415 based on the input data and updates the pose characterization vector 415 over time.
According to some implementations, the interaction handler 420 obtains (e.g., receives, retrieves, or detects) one or more user inputs 421 provided by the user 150 that are associated with selecting A/V content, one or more VAs, and/or XR content for presentation. For example, the one or more user inputs 421 correspond to a gestural input selecting XR content from a UI menu detected via hand/extremity tracking, an eye gaze input selecting XR content from the UI menu detected via eye tracking, a voice command selecting XR content from the UI menu detected via a microphone, and/or the like. In some implementations, the content selector 422 selects XR content 427 from the content library 425 based on one or more user inputs 421 (e.g., a voice command, a selection from a menu of XR content items, and/or the like).
In various implementations, the content manager 430 manages and updates the layout, setup, structure, and/or the like for the XR environment 128 including one or more of VAs, XR content, one or more UI elements associated with the XR content, and/or the like. To that end, the content manager 430 includes the frame buffer 434, the content updater 436, and the feedback engine 438.
In some implementations, the frame buffer 434 includes XR content, a rendered image frame, and/or the like for one or more past instances and/or frames. In some implementations, the content updater 436 modifies the XR environment 128 over time based on the eye tracking vector 413, the pose characterization vector 415, user inputs 421 associated with modifying and/or manipulating the XR content or VA(s), translational or rotational movement of objects within the physical environment 105, translational or rotational movement of the electronic device 120 (or the user 150), and/or the like. In some implementations, the feedback engine 438 generates sensory feedback (e.g., visual feedback such as text or lighting changes, audio feedback, haptic feedback, etc.) associated with the XR environment 128.
According to some implementations, the pose determiner 452 determines a current camera pose of the electronic device 120 and/or the user 150 relative to the XR environment 128 and/or the physical environment 105 based at least in part on the pose characterization vector 415. In some implementations, the renderer 454 renders the VA(s), the XR content 427, one or more UI elements associated with the XR content, and/or the like according to the current camera pose relative thereto.
According to some implementations, the optional image processing architecture 462 obtains an image stream from an image capture device 370 including one or more images of the physical environment 105 from the current camera pose of the electronic device 120 and/or the user 150. In some implementations, the image processing architecture 462 also performs one or more image processing operations on the image stream such as warping, color correction, gamma correction, sharpening, noise reduction, white balance, and/or the like. In some implementations, the optional compositor 464 composites the rendered XR content with the processed image stream of the physical environment 105 from the image processing architecture 462 to produce rendered image frames of the XR environment 128. In various implementations, the presenter 470 presents the rendered image frames of the XR environment 128 to the user 150 via the one or more displays 312. One of ordinary skill in the art will appreciate that the optional image processing architecture 462 and the optional compositor 464 may not be applicable for fully virtual environments (or optical see-through scenarios).
As shown in
In other words, in some implementations, the electronic device 120 is configured to present XR content and to enable optical see-through or video pass-through of at least a portion of the physical environment 105 on the display 122 (e.g., the door 115). For example, the electronic device 120 corresponds to a mobile phone, tablet, laptop, near-eye system, wearable computing device, or the like.
As shown in
As shown in
As shown in
As shown in
As shown in
In
As shown in
In some implementations, in response to detecting selection of one of the search results 544A, 544B, and 544N within the second content pane 542A with a first input type (e.g., a speech input, touch input, hand tracking input, eye tracking input, or the like), the electronic device 120 ceases to display the second content pane 542A and presents content associated with the selected search result within the first content pane 514A with the first appearance and the first z-depth value within the XR environment 128. In some implementations, in response to detecting selection of one of the search results 544A, 544B, and 544N within the second content pane 542A with a second input type (e.g., a pinch and pull gesture, another gesture, or the like), the electronic device 120 may open an associated web page or display associated content in a new tab within a web browser application. In some implementations, in response to detecting selection of one of the search results 544A, 544B, and 544N within the second content pane 542A with a second input type (e.g., a pinch and pull gesture, another gesture, or the like), the electronic device 120 may open an associated web page or display associated content in a pane associated with a new stack or an existing stack of content panes. One of ordinary skill in the art will appreciate that the stacks of content panes may be configured similarly to Provisional patent application No. 62/210,415, filed on Jun. 14, 2021 (Attorney Docket No. 27753-50477PR1), which is incorporated by reference herein in its entirety.
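A hedged sketch of this input-type dispatch follows; the enum cases and handler closures are hypothetical stand-ins for the behaviors described above.

```swift
enum SelectionInput {
    case gaze, touch, speech, handGesture      // examples of a "first input type"
    case pinchAndPull                          // example of a "second input type"
}

func handleSearchResultSelection(_ input: SelectionInput,
                                 openInExistingPane: () -> Void,
                                 openInNewTabOrStack: () -> Void) {
    switch input {
    case .gaze, .touch, .speech, .handGesture:
        // First input type: dismiss the second pane and present the selected
        // result's content in the first content pane at the first z-depth.
        openInExistingPane()
    case .pinchAndPull:
        // Second input type: open the associated page in a new tab, or in a
        // pane belonging to a new or existing stack of content panes.
        openInNewTabOrStack()
    }
}
```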
As shown in
In some implementations, the second z-depth value associated with the second content pane 542B in
In some implementations, the second appearance associated with the second content pane 542B in
As shown in
As shown in
In some implementations, in response to detecting selection of the second content pane 542B in
As shown in
As shown in
As shown in
As discussed above, current web browsers may use a tab arrangement for open web pages and also provide a means for viewing browsing history. This organizational structure makes it difficult to concurrently view past and present web pages and/or searches. While in an extended reality (XR) environment, a computing system arranges content panes in z-depth, where the panes are selectable/accessible via hand tracking inputs, eye tracking inputs, speech inputs, and/or the like. As such, in various implementations described herein, the selection of an input field (e.g., a search bar) within a first content pane causes the first content pane, including first content (e.g., a web page or other media), to be pushed backward in z-depth and a second content pane to be displayed at the former z-depth of the first content pane.
As represented by block 610, the method 600 includes displaying a first content pane with a first appearance at a first z-depth within an extended reality (XR) environment, wherein the first content pane includes first content and an input field. In
In some implementations, as represented by block 612, the first content pane corresponds to one of a web browser window, an application window, or an operating system window; and the first content corresponds to one of text, one or more images, one or more videos, or audio data. In some implementations, the first content pane is volumetric or three-dimensional (3D).
In some implementations, the first content pane is overlaid on the physical environment while displayed within the XR environment. As shown in
In some implementations, the display device corresponds to a transparent lens assembly, and wherein the XR environment is projected onto the transparent lens assembly. In some implementations, the display device corresponds to a near-eye system, and wherein presenting the XR environment includes compositing the XR environment with one or more images of a physical environment captured by an exterior-facing image sensor. In some implementations, the XR environment corresponds to AR content overlaid on the physical environment. In one example, the XR environment is associated with an optical see-through configuration. In another example, the XR environment is associated with a video pass-through configuration. In some implementations, the XR environment corresponds to a VR environment with VR content.
As represented by block 620, the method 600 includes detecting a user input directed to the input field. In
As represented by block 630, in response to detecting the user input directed to the input field, the method 600 includes: moving the first content pane to a second z-depth within the XR environment, wherein the second z-depth is different from the first z-depth; modifying the first content pane by changing the first content pane from the first appearance to a second appearance; and displaying a second content pane with the first appearance at the first z-depth within the XR environment. In
In some implementations, as represented by block 632, the first appearance is different from the second appearance. In some implementations, as represented by block 634A, the second appearance is associated with a higher translucency value than the first appearance. In some implementations, as represented by block 634B, the second appearance is associated with a higher blur radius value than the first appearance. In some implementations, modifying the first content pane includes changing from the first appearance to a second appearance by blurring at least a portion of the first content pane. As one example, in
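As an illustrative sketch of blocks 634A and 634B (with arbitrary example values), the second appearance could be derived from the first by raising the translucency and blur radius; the property names and increments are assumptions, not values from the disclosure.

```swift
struct PaneAppearance {
    var translucency: Double   // 0.0 opaque ... 1.0 fully transparent
    var blurRadius: Double     // 0.0 means no blur
}

func secondAppearance(from first: PaneAppearance) -> PaneAppearance {
    PaneAppearance(
        translucency: min(first.translucency + 0.35, 1.0),  // higher translucency than the first appearance (block 634A)
        blurRadius: first.blurRadius + 10.0                  // higher blur radius than the first appearance (block 634B)
    )
}
```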
In some implementations, the second content pane includes at least one of: one or more previous search queries, one or more search recommendations, or one or more content recommendations based on at least one of one or more user preferences, a user search history, or current context. As one example, in
In some implementations, the user input includes a search string provided via a virtual keyboard or a speech input. In some implementations, the second content pane includes at least one of: one or more search results, one or more search recommendations, or one or more content recommendations based on the search string. As one example, in
In some implementations, the method 600 further includes: detecting a subsequent user input associated with selecting a respective search result among the one or more search results; and, in response to detecting the subsequent user input: displaying a preview pane with the first appearance at the first z-depth within the XR environment, wherein the preview pane is associated with the respective search result; moving the first content pane to a third z-depth within the XR environment, wherein the third z-depth is different from the second z-depth; moving the second content pane to a fourth z-depth within the XR environment; and modifying the second content pane by changing the second content pane from the first appearance to the second appearance. In some implementations, the fourth z-depth value corresponds to a z-depth value that is different from the first z-depth value and less than the third z-depth value. In some implementations, the fourth z-depth value corresponds to the second z-depth value. For example, the subsequent user input corresponds to a gaze input directed to the respective search result for at least a predetermined amount of time such as X seconds. As another example, the subsequent user input corresponds to one of a touch input, speech input, hand tracking input, eye tracking input, gestural input, and/or the like. In some implementations, the preview pane at least partially overlaps the second content pane. In some implementations, the second content pane closes at least temporarily while the preview pane is presented.
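For illustration only, the preview-pane flow might be sketched as follows, with a hypothetical pane type and an arbitrary depth step; the disclosure does not prescribe these values.

```swift
final class PreviewStackPane {
    var zDepth: Double
    var dimmed: Bool                 // true = the second (translucent/blurred) appearance
    init(zDepth: Double, dimmed: Bool = false) { self.zDepth = zDepth; self.dimmed = dimmed }
}

// `first` currently sits behind `second`; `second` sits at the first z-depth.
// Selecting a search result in `second` pushes both panes deeper and shows a
// preview pane with the first appearance at the first z-depth.
func presentPreviewPane(first: PreviewStackPane,
                        second: PreviewStackPane,
                        step: Double = 0.25) -> PreviewStackPane {
    let firstZDepth = second.zDepth
    first.zDepth += step                 // third z-depth, farther than the second z-depth
    second.zDepth = firstZDepth + step   // fourth z-depth, behind the first z-depth
    second.dimmed = true                 // second pane changes to the second appearance
    return PreviewStackPane(zDepth: firstZDepth)   // preview pane at the first z-depth
}
```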
As one example, in
As another example, in
In some implementations, the method 600 further includes: detecting a subsequent user input associated with selecting the first content pane; and, in response to detecting the subsequent user input: moving the first content pane to the first z-depth within the XR environment; moving the second content pane to the second z-depth within the XR environment; modifying the second content pane by changing the second content pane from the first appearance to the second appearance; and modifying the first content pane by changing the first content pane from the second appearance to the first appearance. In some implementations, the subsequent user input corresponds to one of a hand tracking input, an eye tracking input, a touch input, a gestural input, or a speech input. As one example, in
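A minimal sketch of this swap, under a hypothetical pane representation in which a dimmed flag stands in for the second (translucent/blurred) appearance:

```swift
final class StackedPane {
    var zDepth: Double
    var dimmed: Bool    // true = the second (translucent/blurred) appearance
    init(zDepth: Double, dimmed: Bool) { self.zDepth = zDepth; self.dimmed = dimmed }
}

// Selecting the pushed-back first pane swaps the two panes' z-depths and appearances.
func bringForward(_ first: StackedPane, sendingBack second: StackedPane) {
    let firstZDepth = second.zDepth    // the second pane currently occupies the first z-depth
    let secondZDepth = first.zDepth
    first.zDepth = firstZDepth         // first pane returns to the first z-depth
    second.zDepth = secondZDepth       // second pane moves to the second z-depth
    first.dimmed = false               // second appearance -> first appearance
    second.dimmed = true               // first appearance -> second appearance
}
```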
In some implementations, the method 600 further includes: detecting a subsequent user input associated with manipulating the second content pane; and, in response to detecting the subsequent user input, manipulating the second content pane based on the subsequent user input by at least one of: translating the second content pane, rotating the second content pane, scaling the second content pane, or modifying an appearance parameter of the second content pane. For example, the appearance parameter corresponds to one of a color, contrast, texture, brightness, etc. As one example, the user 150 may interact with the second content pane 542A in
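As an illustrative sketch of these manipulations (translation, rotation, scaling, and an example appearance parameter), using an assumed transform representation rather than the disclosure's:

```swift
import simd

struct PaneTransform {
    var position: SIMD3<Float>
    var rotation: simd_quatf
    var scale: Float
    var brightness: Float
}

enum Manipulation {
    case translate(SIMD3<Float>)
    case rotate(simd_quatf)
    case scale(Float)
    case setBrightness(Float)   // one example of an appearance parameter
}

func apply(_ manipulation: Manipulation, to transform: inout PaneTransform) {
    switch manipulation {
    case .translate(let delta):   transform.position += delta
    case .rotate(let q):          transform.rotation = q * transform.rotation
    case .scale(let factor):      transform.scale *= factor
    case .setBrightness(let b):   transform.brightness = b
    }
}
```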
While various aspects of implementations within the scope of the appended claims are described above, it should be apparent that the various features of implementations described above may be embodied in a wide variety of forms and that any specific structure and/or function described above is merely illustrative. Based on the present disclosure one skilled in the art should appreciate that an aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method may be practiced using any number of the aspects set forth herein. In addition, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to or other than one or more of the aspects set forth herein.
It will also be understood that, although the terms “first”, “second”, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first media item could be termed a second media item, and, similarly, a second media item could be termed a first media item, without changing the meaning of the description, so long as the occurrences of the “first media item” are renamed consistently and the occurrences of the “second media item” are renamed consistently. The first media item and the second media item are both media items, but they are not the same media item.
The terminology used herein is for the purpose of describing particular implementations only and is not intended to be limiting of the claims. As used in the description of the implementations and the appended claims, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in accordance with a determination” or “in response to detecting,” that a stated condition precedent is true, depending on the context. Similarly, the phrase “if it is determined [that a stated condition precedent is true]” or “if [a stated condition precedent is true]” or “when [a stated condition precedent is true]” may be construed to mean “upon determining” or “in response to determining” or “in accordance with a determination” or “upon detecting” or “in response to detecting” that the stated condition precedent is true, depending on the context.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2022/028702 | 5/11/2022 | WO |

Number | Date | Country
---|---|---
63195772 | Jun 2021 | US