This relates generally to computer graphics editors and more specifically to devices, methods and user interfaces for three-dimensional previews of computer graphical objects.
Some computer graphical environments provide two-dimensional and/or three-dimensional environments where at least some objects displayed for a user's viewing are virtual and generated by a computer. In some uses, a user may create or modify extended reality (XR) environments, such as by editing, generating, or otherwise manipulating XR virtual objects using a content generation environment, such as a graphics editor or graphics editing interface running on a content creation application, for example. In one or more examples, creation or modification of XR environments, including content items (e.g., two-dimensional and/or three-dimensional objects) within the XR environments, may include generating and presenting, to the user, a preview of the content items at various intermediate stages of the content creation process. However, such previews of content items that are generated and presented to the user in two-dimensions are limited by the two-dimensional display and graphics processing characteristics of the device on which the content creation application runs. Editors that allow for intuitive editing of computer-generated virtual objects presented in both two-dimensions and three-dimensions are thus desirable. Additionally, the previews of content items in a content creation application are detached from the context of the three-dimensional applications in which the content is to be presented. Simulators that allow for intuitive simulation (in two-dimensions or three-dimensions) of content items in the context of the three-dimensional applications are thus desirable.
Some examples of the disclosure are directed to a preview of content (e.g., XR content also referred to herein as XR content item(s)) generated and presented at an electronic device in either a two-dimensional or three-dimensional environment (e.g., in a computer-generated environment). In one or more examples, while the three-dimensional preview of content is presented, one or more affordances are provided for interacting with the one or more computer-generated virtual objects of the preview. Additionally or alternatively, interacting with the one or more computer-generated virtual objects of the preview can be initiated through text-based command line inputs or other non-graphical techniques. In one or more examples, the one or more affordances/selectable options may be displayed on a user interface with the three-dimensional preview of content (e.g., displayed below, in front of, above, adjacent to, etc. the one or more virtual objects of the three-dimensional preview).
In one or more examples, the user interface associated with content preview includes a selectable option that, when selected, displays a three-dimensional bounding volume concurrently with the preview of the three-dimensional object. Additionally or alternatively, the three-dimensional bounding volume can be automatically displayed without requiring user interaction to initiate display. The three-dimensional bounding volume is generated using size information associated with a content application and received at the electronic device. The three-dimensional bounding volume is displayed to show a size relationship between a three-dimensional virtual object being previewed and the three-dimensional environment of the software application, so that a user of the device can efficiently ascertain what portions of the three-dimensional virtual object will be clipped (i.e., not be visible) in the three-dimensional environment due to the portions exceeding the bounds of the three-dimensional environment of the content application, or more generally ascertain how a content item will appear in a given three-dimensional environment. In one or more examples, the displayed bounding volume can include size labels, which provide numerical representations of the size of the bounding volume (e.g., length, width, height for a bounding box), for example, as the bounding volume exists in the three-dimensional environment. In one or more examples, information describing the virtual object is received from a content application and used to generate the preview of the three-dimensional virtual object. By using both the size information associated with the three-dimensional object and the size information associated with the three-dimensional environment (e.g., the three-dimensional environment associated with the application the content being previewed will be displayed in), the device can display a representation of both to allow the user to determine what portions of the virtual object will be clipped.
Additionally or alternatively, the electronic device applies one or more visual indicators to the one or more portions of the three-dimensional virtual object preview that will be clipped due to exceeding the bounds of the three-dimensional environment of the content application. Optionally, the visual indicators include applying shading, highlighting, and/or color to the portions of the virtual object that will be clipped. Additionally or alternatively, applying the visual indicators includes forgoing display of the clipped portions of an object, or displaying the clipped portions at a reduced opacity. Optionally, one or more visual indicators can also be applied to the portions of the virtual object that will not be clipped. The one or more visual indicators are configured to visually delineate the portions of the virtual object that will be clipped from the portions of the virtual object that will not be clipped (i.e., because they fit within the bounds of the three-dimensional environment). In one or more examples, the device displays the one or more visual indicators concurrently with the three-dimensional bounding volume associated with the three-dimensional environment described above. Additionally or alternatively, the device displays the one or more visual indicators without displaying the bounding volume. Additionally or alternatively, the bounding volume is displayed without the one or more visual indicators.
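By way of illustration only, the following Swift sketch shows one way the clipping classification and visual treatment described above could be expressed; all type and function names are hypothetical and are not drawn from the disclosure.

```swift
struct Bounds3D {
    var minX, minY, minZ: Double
    var maxX, maxY, maxZ: Double

    /// True when `other` lies entirely inside these bounds.
    func fullyContains(_ other: Bounds3D) -> Bool {
        other.minX >= minX && other.maxX <= maxX &&
        other.minY >= minY && other.maxY <= maxY &&
        other.minZ >= minZ && other.maxZ <= maxZ
    }
}

enum VisualIndicator {
    case none            // rendered normally (portion fits within the environment)
    case reducedOpacity  // clipped portion shown at reduced opacity
    case hidden          // clipped portion not displayed at all
}

struct ObjectPortion {
    let name: String
    let bounds: Bounds3D
    var indicator: VisualIndicator = .none
}

/// Marks every portion that extends beyond the environment bounds as clipped,
/// either hiding it or assigning a reduced-opacity treatment.
func markClippedPortions(_ portions: [ObjectPortion],
                         environmentBounds: Bounds3D,
                         hideClippedPortions: Bool) -> [ObjectPortion] {
    portions.map { portion -> ObjectPortion in
        var updated = portion
        if !environmentBounds.fullyContains(portion.bounds) {
            updated.indicator = hideClippedPortions ? .hidden : .reducedOpacity
        }
        return updated
    }
}
```

In this sketch a portion is treated as clipped whenever any part of it extends beyond the environment bounds; a finer-grained implementation could instead clip geometry exactly at the boundary.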
In one or more examples, the user interface associated with the content preview includes one or more content preview viewports that are configured to display a preview of a virtual object based on the state of the application code associated with the virtual object. In one or more examples, each content preview viewport can operate in a pause mode or a play mode, and a respective preview viewport can be toggled between the two modes. When the content preview viewport is in the play mode, any modification made to the application code associated with the virtual object automatically causes the displayed content preview to be updated in accordance with the modification of the application code. When the content preview viewport is in the pause mode, the device creates a snapshot of the application code at the time that the viewport enters the pause mode. While in the pause mode, the viewport continues to display the preview of the virtual object based on the snapshot and forgoes updating the preview in response to any subsequent modifications to the application code until the respective viewport toggles to the play mode. In one or more examples, the user is able to restore the application code based on a snapshot stored on the electronic device.
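As a non-limiting illustration of the play/pause and snapshot behavior just described, the following Swift sketch uses hypothetical names (PreviewViewport, codeForPreview, etc.); it is one possible model, not the disclosed implementation.

```swift
enum ViewportMode { case play, pause }

final class PreviewViewport {
    private(set) var mode: ViewportMode = .play
    private(set) var snapshot: String?   // application code captured on entering pause
    private var latestCode: String

    init(initialCode: String) {
        self.latestCode = initialCode
    }

    /// Selecting the play/pause option toggles the viewport's mode.
    func togglePlayPause() {
        switch mode {
        case .play:
            mode = .pause
            snapshot = latestCode        // freeze the code the preview is built from
        case .pause:
            mode = .play
            snapshot = nil               // resume following live edits
        }
    }

    /// Called whenever the developer modifies the application code.
    func codeDidChange(to newCode: String) {
        latestCode = newCode
    }

    /// The application code this viewport's preview is rendered from:
    /// the snapshot while paused, the live code while playing.
    var codeForPreview: String {
        mode == .pause ? (snapshot ?? latestCode) : latestCode
    }

    /// The stored snapshot, if any, from which the editor could restore
    /// the application code.
    func codeToRestore() -> String? { snapshot }
}

// Usage: edits made while the viewport is paused do not change its preview.
let viewport = PreviewViewport(initialCode: "box(width: 1.0)")
viewport.togglePlayPause()                      // enter pause mode; snapshot taken
viewport.codeDidChange(to: "box(width: 2.0)")   // subsequent modification
print(viewport.codeForPreview)                  // still "box(width: 1.0)"
viewport.togglePlayPause()                      // back to play mode
print(viewport.codeForPreview)                  // now "box(width: 2.0)"
```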
The full descriptions of these examples are provided in the Drawings and the Detailed Description, and it is understood that this Summary does not limit the scope of the disclosure in any way.
For a better understanding of the various described examples, reference should be made to the Detailed Description below, in conjunction with the following drawings in which like reference numerals often refer to corresponding parts throughout the figures.
XR content (e.g., content that is meant to be displayed in a three-dimensional environment) can be previewed either in three-dimensions (e.g., by using a head mounted display) or on a two-dimensional display that is capable of rendering two-dimensional representations of three-dimensional objects. In one or more examples, an XR content application (e.g., a software application that is configured to display content in a three-dimensional environment) includes one or more virtual objects/content that are displayed in a three-dimensional virtual environment.
In one or more examples, XR content can be presented to the user via an XR data file (data file) (including script, executable code, etc.) that includes data representing the XR content and/or data describing how the XR content is to be presented. In one or more examples, the XR file includes data representing one or more XR scenes and one or more triggers for presentation of the one or more XR scenes. For example, an XR scene may be anchored to a horizontal, planar surface, such that when a horizontal, planar surface is detected (e.g., in the field of view of one or more cameras), the XR scene can be presented. The XR file can also include data regarding one or more virtual objects associated with the XR scene, and/or associated triggers and actions involving the XR virtual objects.
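A rough sketch of how the contents of such a data file might be modeled is shown below; the type and property names are purely illustrative and do not describe an actual file format.

```swift
enum SceneTrigger {
    case horizontalPlaneDetected          // e.g., a table or floor detected by the cameras
    case verticalPlaneDetected
    case imageRecognized(name: String)
}

struct ObjectBehavior {
    let trigger: String                   // e.g., "tap"
    let action: String                    // e.g., "playAnimation:spin"
}

struct XRVirtualObject {
    let identifier: String
    let meshResource: String              // reference into the file's asset payload
    var behaviors: [ObjectBehavior]
}

struct XRScene {
    let name: String
    let presentationTrigger: SceneTrigger // when this scene should be presented
    var objects: [XRVirtualObject]
}

struct XRDataFile {
    var scenes: [XRScene]                 // one or more scenes, each with its trigger
}
```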
In order to simplify the generation of XR files and/or editing of computer-generated graphics generally, a content creation application including a content generation environment (e.g., an authoring environment graphical user interface (GUI)) can be used. In one or more examples, a content generation environment is itself an XR environment (e.g., a two-dimensional and/or three-dimensional environment). For example, a content generation environment can include one or more virtual objects and one or more representations of real-world objects. In one or more examples, the virtual objects are superimposed over a physical environment, or a representation thereof. In one or more examples, the physical environment is captured via one or more cameras of the electronic device and is actively displayed in the XR environment (e.g., via the display generation component). In one or more examples, the physical environment is (e.g., passively) provided by the electronic device, for example, if the display generation component includes a translucent or transparent element through which the user is able to see the physical environment.
In such a content generation environment, a user can create virtual objects from scratch (including the appearance of the virtual objects, behaviors/actions of the virtual objects, and/or triggers for the behaviors/actions of the virtual objects). Additionally or alternatively, virtual objects can be created by other content creators and imported into the content generation environment, where the virtual objects can be placed into an XR environment or scene. In one or more examples, virtual objects generated in a content generation environment or entire environments can be exported to other environments or XR scenes (e.g., via generating an XR file and importing or opening the XR file in a content creation application or XR viewer application).
Some examples of the disclosure are directed to an application for previewing content generated and presented at an electronic device in a computer-generated environment. In one or more examples, while the three-dimensional preview of content is presented, one or more affordances (i.e., selectable options on a user interface) are provided for interacting with the one or more computer-generated virtual objects of the three-dimensional preview. In one or more examples, the one or more affordances may be displayed with the three-dimensional preview of content (e.g., displayed below, in front of, above, adjacent to, etc. the one or more virtual objects of the three-dimensional preview).
In some examples, the three-dimensional preview of content is configurable to operate in at least two modes. A first mode (e.g., play mode) can simulate run-time of the content in which one or more actions (e.g., animations, audio clips, etc.) associated with the content can be performed. For example, one or more virtual objects of the three-dimensional preview may be animated to move, and one or more virtual objects can respond to an input to execute additional animations or other behaviors. A second mode (e.g., edit mode) can provide a three-dimensional preview of the content to allow a user to interact with the content for the purposes of editing the three-dimensional content. For example, a user may select an element in the three-dimensional content and the corresponding element can be selected in the two-dimensional representation of the content generation environment.
In one or more examples, the object manipulator can include a first affordance that is selectable to cause the three-dimensional preview to operate in a first mode. In one or more examples, the object manipulator can include a second affordance that is selectable to cause the three-dimensional preview to operate in a second mode, different than the first mode. In one or more examples, the first and second affordances can be a singular affordance that toggles the mode (e.g., a play/pause button). In one or more examples, the object manipulator can include a third affordance that is selectable to scale dimensions of one or more virtual objects of the preview. In one or more examples, the object manipulator can include a fourth affordance that is selectable to step through executable code associated with playback of the previewed content, causing one or more actions (e.g., animations, audio clips, etc.) to be performed by the one or more virtual objects incrementally with each selection of the fourth affordance. In one or more examples, the object manipulator can also include a fifth affordance that is selectable to cause the three-dimensional preview to operate in a third mode, different than the first mode and the second mode. In the third mode, a full-scale representation of the one or more virtual objects of the three-dimensional preview is displayed within the three-dimensional environment.
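The step-through behavior of the fourth affordance could be modeled roughly as follows; this is a minimal Swift sketch with hypothetical names, performing one queued action per selection rather than letting playback run freely.

```swift
struct PreviewAction {
    let description: String
    let perform: () -> Void
}

final class SteppedPlayback {
    private var queue: [PreviewAction]
    private var index = 0

    init(actions: [PreviewAction]) {
        self.queue = actions
    }

    /// Invoked on each selection of the step affordance.
    /// Returns false once all actions have been performed.
    @discardableResult
    func step() -> Bool {
        guard index < queue.count else { return false }
        queue[index].perform()
        index += 1
        return true
    }
}

// Usage: each press of the affordance performs the next action in order.
let playback = SteppedPlayback(actions: [
    PreviewAction(description: "rotate chair") { print("rotate chair") },
    PreviewAction(description: "play audio clip") { print("play audio clip") }
])
playback.step()   // "rotate chair"
playback.step()   // "play audio clip"
```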
As described herein, some examples of the disclosure are directed to user-interactions with and/or manipulations of a three-dimensional preview of content (e.g., XR content item) displayed on an electronic device. In one or more examples, a two-dimensional representation of an XR content item generated in a content creation application displayed on a first electronic device may be concurrently displayed with a three-dimensional preview of the XR content item on a second electronic device. In one or more examples, user interactions (e.g., user input, such as touch, tap, motion, reorientation, etc.) with the three-dimensional preview of the XR content item received at the second electronic device may cause the display of the three-dimensional preview of the XR content item to be updated according to the input. In one or more examples, the user input received at the second electronic device is communicated to the first electronic device in real time, such that the displays of the two-dimensional representation of the XR content item and the three-dimensional preview of the XR content item are optionally manipulated concurrently or nearly concurrently (e.g., within less than 50 ms of one another).
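One possible way to keep the two displays in step, sketched below in Swift with hypothetical names: input received on either device is applied locally and forwarded to the peer so both representations update nearly concurrently. This is an assumption about the communication pattern, not the disclosed protocol.

```swift
struct ManipulationEvent {
    let objectID: String
    let rotationDegrees: Double
}

final class PreviewSession {
    let name: String
    weak var peer: PreviewSession?

    init(name: String) { self.name = name }

    /// Called when this device receives user input on its own preview.
    func receiveLocalInput(_ event: ManipulationEvent) {
        apply(event)          // update this device's display immediately
        peer?.apply(event)    // forward so the other device updates in near real time
    }

    func apply(_ event: ManipulationEvent) {
        print("\(name): rotate \(event.objectID) by \(event.rotationDegrees) degrees")
    }
}

let authoringApp = PreviewSession(name: "2D representation (first device)")
let headsetPreview = PreviewSession(name: "3D preview (second device)")
authoringApp.peer = headsetPreview
headsetPreview.peer = authoringApp

// A rotation gesture on the second device updates both displays.
headsetPreview.receiveLocalInput(ManipulationEvent(objectID: "chair", rotationDegrees: 15))
```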
Manipulating the three-dimensional preview of the content may include altering an appearance of one or more virtual objects of the three-dimensional preview. In one or more examples, manipulations of the three-dimensional preview are optionally determined by the mode of operation of the electronic device presenting the three-dimensional environment. In one or more examples, changes to a viewpoint associated with the electronic device may alter a view of the three-dimensional preview visible by the user. In one or more examples, changes to the viewpoint associated with the electronic device may alter an orientation and/or a location of the object manipulator within the three-dimensional environment, such that the object manipulator optionally continues to face the user.
In the discussion that follows, an electronic device that is in communication with a display generation component and one or more input devices is described. It should be understood that the electronic device optionally is in communication with one or more other physical user-interface devices, such as a touch-sensitive surface, a physical keyboard, a mouse, a joystick, a hand tracking device, an eye tracking device, a stylus, etc. Further, as described above, it should be understood that the described electronic device, display and touch-sensitive surface are optionally distributed amongst two or more devices. Therefore, as used in this disclosure, information displayed on the electronic device or by the electronic device is optionally used to describe information outputted by the electronic device for display on a separate display device (touch-sensitive or not). Similarly, as used in this disclosure, input received on the electronic device (e.g., touch input received on a touch-sensitive surface of the electronic device, or touch input received on the surface of a stylus) is optionally used to describe input received on a separate input device, from which the electronic device receives input information.
The device typically supports a variety of applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, a television channel browsing application, and/or a digital video player application. Additionally, the device may support an application for generating or editing content for computer generated graphics and/or XR environments (e.g., an application with a content generation environment). Additionally, the device may support a three-dimensional graphic rendering application for generating and presenting XR content and/or XR environments in three-dimensions.
In one or more examples, as illustrated in
Communication circuitry 222A, 222B optionally includes circuitry for communicating with electronic devices, networks, such as the Internet, intranets, a wired network and/or a wireless network, cellular networks, and wireless local area networks (LANs). Communication circuitry 222A, 222B optionally includes circuitry for communicating using near-field communication (NFC) and/or short-range communication, such as Bluetooth®.
Processor(s) 218A, 218B include one or more general processors, one or more graphics processors, and/or one or more digital signal processors. In one or more examples, memory 220A, 220B is a non-transitory computer-readable storage medium (e.g., flash memory, random access memory, or other volatile or non-volatile memory or storage) that stores computer-readable instructions configured to be executed by processor(s) 218A, 218B to perform the techniques, processes, and/or methods described below. In one or more examples, memory 220A, 220B can include more than one non-transitory computer-readable storage medium. A non-transitory computer-readable storage medium can be any medium (e.g., excluding a signal) that can tangibly contain or store computer-executable instructions for use by or in connection with the instruction execution system, apparatus, or device. In one or more examples, the storage medium is a transitory computer-readable storage medium. In one or more examples, the storage medium is a non-transitory computer-readable storage medium. The non-transitory computer-readable storage medium can include, but is not limited to, magnetic, optical, and/or semiconductor storages. Examples of such storage include magnetic disks, optical discs based on compact disc (CD), digital versatile disc (DVD), or Blu-ray technologies, as well as persistent solid-state memory such as flash, solid-state drives, and the like.
In one or more examples, display generation component(s) 214A, 214B include a single display (e.g., a liquid-crystal display (LCD), organic light-emitting diode (OLED), or other types of display). In one or more examples, display generation component(s) 214A, 214B includes multiple displays. In one or more examples, display generation component(s) 214A, 214B can include a display with touch capability (e.g., a touch screen), a projector, a holographic projector, a retinal projector, etc. In one or more examples, device 270 includes touch-sensitive surface(s) 208 for receiving user inputs, such as tap inputs and swipe inputs or other gestures. In one or more examples, display generation component(s) 214B and touch-sensitive surface(s) 208 form touch-sensitive display(s) (e.g., a touch screen integrated with device 270 or external to device 270 that is in communication with device 270).
Device 270 optionally includes image sensor(s) 206. Image sensor(s) 206 optionally include one or more visible light image sensors, such as charged coupled device (CCD) sensors, and/or complementary metal-oxide-semiconductor (CMOS) sensors operable to obtain images of physical objects from the real-world environment. Image sensor(s) 206 also optionally include one or more infrared (IR) sensors, such as a passive or an active IR sensor, for detecting infrared light from the real-world environment. For example, an active IR sensor includes an IR emitter for emitting infrared light into the real-world environment. Image sensor(s) 206 also optionally include one or more cameras configured to capture movement of physical objects in the real-world environment. Image sensor(s) 206 also optionally include one or more depth sensors configured to detect the distance of physical objects from device 270. In one or more examples, information from one or more depth sensors can allow the device to identify and differentiate objects in the real-world environment from other objects in the real-world environment. In one or more examples, one or more depth sensors can allow the device to determine the texture and/or topography of objects in the real-world environment.
In one or more examples, device 270 uses CCD sensors, event cameras, and depth sensors in combination to detect the physical environment around device 270. In one or more examples, image sensor(s) 206 include a first image sensor and a second image sensor. The first image sensor and the second image sensor work in tandem and are optionally configured to capture different information of physical objects in the real-world environment. In one or more examples, the first image sensor is a visible light image sensor, and the second image sensor is a depth sensor. In one or more examples, device 270 uses image sensor(s) 206 to detect the position and orientation of device 270 and/or display generation component(s) 214 in the real-world environment. For example, device 270 uses image sensor(s) 206 to track the position and orientation of display generation component(s) 214B relative to one or more fixed objects in the real-world environment.
In one or more examples, device 270 includes microphone(s) 213 or other audio sensors. Device 270 uses microphone(s) 213 to detect sound from the user and/or the real-world environment of the user. In one or more examples, microphone(s) 213 includes an array of microphones (a plurality of microphones) that optionally operate in tandem, such as to identify ambient noise or to locate the source of sound in space of the real-world environment.
Device 270 includes location sensor(s) 204 for detecting a location of device 270 and/or display generation component(s) 214B. For example, location sensor(s) 204 can include a global positioning system (GPS) receiver that receives data from one or more satellites and allows device 270 to determine the device's absolute position in the physical world.
Device 270 includes orientation sensor(s) 210 for detecting orientation and/or movement of device 270 and/or display generation component(s) 214B. For example, device 270 uses orientation sensor(s) 210 to track changes in the position and/or orientation of device 270 and/or display generation component(s) 214B, such as with respect to physical objects in the real-world environment. Orientation sensor(s) 210 optionally include one or more gyroscopes and/or one or more accelerometers.
Device 270 includes hand tracking sensor(s) 202 and/or eye tracking sensor(s) 212, in one or more examples. Hand tracking sensor(s) 202 are configured to track the position/location of one or more portions of the user's hands, and/or motions of one or more portions of the user's hands with respect to the extended reality environment, relative to the display generation component(s) 214B, and/or relative to another defined coordinate system. Eye tracking sensor(s) 212 are configured to track the position and movement of a user's gaze (eyes, face, or head, more generally) with respect to the real-world or extended reality environment and/or relative to the display generation component(s) 214B. In one or more examples, hand tracking sensor(s) 202 and/or eye tracking sensor(s) 212 are implemented together with the display generation component(s) 214B. In one or more examples, the hand tracking sensor(s) 202 and/or eye tracking sensor(s) 212 are implemented separate from the display generation component(s) 214B.
In one or more examples, the hand tracking sensor(s) 202 can use image sensor(s) 206 (e.g., one or more IR cameras, 3D cameras, depth cameras, etc.) that capture three-dimensional information from the real-world including one or more hands (e.g., of a human user). In one or more examples, the hands can be resolved with sufficient resolution to distinguish fingers and their respective positions. In one or more examples, one or more image sensor(s) 206 are positioned relative to the user to define a field of view of the image sensor(s) 206 and an interaction space in which finger/hand position, orientation and/or movement captured by the image sensors are used as inputs (e.g., to distinguish from a user's resting hand or other hands of other persons in the real-world environment). Tracking the fingers/hands for input (e.g., gestures, touch, tap, etc.) can be advantageous in that it does not require the user to touch, hold or wear any sort of beacon, sensor, or other marker.
In one or more examples, eye tracking sensor(s) 212 includes at least one eye tracking camera (e.g., infrared (IR) cameras) and/or illumination sources (e.g., IR light sources, such as LEDs) that emit light towards a user's eyes. The eye tracking cameras may be pointed towards a user's eyes to receive reflected IR light from the light sources directly or indirectly from the eyes. In one or more examples, both eyes are tracked separately by respective eye tracking cameras and illumination sources, and a focus/gaze can be determined from tracking both eyes. In one or more examples, one eye (e.g., a dominant eye) is tracked by a respective eye tracking camera/illumination source(s).
Device 270 and system 250 are not limited to the components and configuration of
In the example of
In one or more examples, as shown in
As mentioned above, the preview of the content items is generated and presented to the user in two-dimensions, as illustrated in the example of
In one or more examples, the user may request to preview the two-dimensional representation of the content item (e.g., chair and table of the content 364) in three-dimensions. As an example, the content creation application 362 may display a menu 370 including one or more selectable graphical user interface elements (e.g., displayed in the authoring environment GUI, in the display GUI, or some other GUI in or in communication with the content creation application) that, when selected, generates the request. Additionally or alternatively, in one or more examples, the request may be inputted using one or more input devices 366 in communication with the electronic device 360, such as by pressing one or more keys on a keyboard, for example. As shown in the example of
Additionally, or alternatively, the three-dimensional environment 368 optionally includes a three-dimensional preview 334 presenting one or more virtual objects 332 corresponding to the content 364 shown in
For example, in one or more examples, the second electronic device optionally includes hand tracking sensors (e.g., corresponding to hand tracking sensors 202) and eye tracking sensors (e.g., corresponding to eye tracking sensors 212), which may allow the user to interact with and manipulate one or more virtual objects within the three-dimensional environment. As an example, the eye tracking sensors may track the gaze associated with one or both eyes of the user to determine the viewpoint associated with the second electronic device, and thus a direction of the viewpoint within the three-dimensional environment. The hand tracking sensors, for example, may track the movement of one or more fingers of the user to associate respective finger motions (e.g., touch/tap, pinch, drag, etc.) with one or more interactions with one or more elements of the three-dimensional environment. The user may provide input corresponding to selection and/or manipulation of one or more elements within the three-dimensional environment via the respective finger motions.
As shown in
As discussed herein, the three-dimensional preview of the content concurrently displayed with the two-dimensional representation of the content in the content creation application may provide the user with useful visual feedback regarding the appearance of the content in three-dimensions which may otherwise not be provided via the two-dimensional representation. In one or more examples, edits or modifications to the data file running in the content creation application may produce corresponding changes to the appearance of the three-dimensional preview displayed at the second electronic device. As an example, the user may wish to edit or modify one or more features of the content items and view a new three-dimensional preview of the content item in accordance with the edits or modifications. For example, the user may, via one or more input devices in communication with the first electronic device (e.g., via a keyboard), rewrite portions of the script, executable code, etc. of the data file while the two-dimensional representation of the content item is displayed on the first electronic device and the three-dimensional preview of the content item is concurrently displayed on the second electronic device. The user may finalize the edits or modifications (e.g., by saving the changes to the data file) and may request a new preview of the content item representing the data file. Additionally or alternatively, the new preview may be automatically requested once the edits or modifications are finalized by the user. The new (e.g., newly updated) data may be transferred from the content creation application to the three-dimensional rendering application in the manner described above, and the three-dimensional preview of the content item currently displayed on the second electronic device may be updated according to the corresponding changes to the two-dimensional representation of the content item displayed on the first electronic device, such that the three-dimensional preview of the content item has an updated appearance.
As mentioned above, the three-dimensional preview 334 of the content item may be displayed on the second electronic device while the two-dimensional representation 364′ of the content item is concurrently displayed on the representation 360′ of the first electronic device. In one or more examples, the two-dimensional representation of and the three-dimensional preview of the content may be provided at a single electronic device (e.g., a laptop computer, desktop computer, mobile device, etc.), rather than separately provided at two electronic devices. For example, the three-dimensional graphic rendering application may be provided within, or at least partially as, a simulator configured to display a computer-generated environment in three-dimensions. In some such examples, in response to receiving a request to display a content item in three-dimensions, the three-dimensional graphic rendering application may generate and present a preview of the content in three-dimensions within the computer-generated environment of the simulator (e.g., in a different window on the display of the electronic device). In some such examples, the two-dimensional representation of the content may be displayed within the display GUI of the content creation application while the three-dimensional preview of the content is concurrently displayed within the computer-generated environment of the simulator, for example. Additionally or alternatively, in one or more examples, the content creation application may be communicatively linked directly to the three-dimensional graphic rendering application. In some such examples, all or portions of the script, executable code, etc. of the data file at 310 may be transferred directly to the three-dimensional graphic rendering application at 318.
In one or more examples, the graphical data communicated along the communication channel between the first electronic device 360 and the second electronic device may be synchronized. In some such examples, for example, the communication channel between the content creation application 362 and the three-dimensional rendering application (not shown) may be bidirectional, allowing data to be selectively transferred therebetween in either direction, as dictated by the operating mode of the second electronic device (e.g., the head-mounted display).
In one or more examples of the disclosure, content preview applications store a duplicate copy of application source code and use the duplicate copy to preview the content items associated with the application. This allows a developer to manipulate the content item in ways that are not possible within the constraints imposed by conventional simulation applications. In some examples (and as described in further detail below), a preview application using a duplicate copy of the source code allows for full camera controls (so that the developer can view the content item from multiple perspectives), content selection, and custom rendering of virtual objects (e.g., by creating custom shapes and sizes for the virtual objects). A preview application as described above, and explained in detail further below, effectively creates a three-dimensional canvas that the developer can use as a content creation tool driven by data provided by the application being previewed.
In one or more examples, a user previewing a content item according to the examples described above will want to understand how the content item will be visualized when it is placed within a three-dimensional environment generated by a content application. For instance, depending on the size of the content item, as well as the orientation/location of the object, at least a portion of the content item may fall outside of a three-dimensional environment, thus making that portion of the content not visible to the user while the user is interacting with the three-dimensional environment. While editing the content item or adjusting the location of the content item in the three-dimensional environment, the user may want to understand how any edits being applied to the content item will affect the user's perception of the item when the content item is rendered in a three-dimensional environment, and may also want to understand the size of the content item with respect to the three-dimensional environment it will be rendered in. The size/volume of a three-dimensional environment can vary depending on the application that is rendering the three-dimensional environment, as well as the device that the application is running on. For instance, a particular XR content application may only be allocated a portion of the XR environment while it is operating. Thus, in one or more examples, the content items associated with an XR application may be limited to the size of the application environment rather than the size of the total XR environment. Any portion of the content item exceeding the bounds of the application environment (even though it may be smaller than the total bounds of the XR environment) may be "clipped" (i.e., not displayed). Thus, in one or more examples, in order to provide an enhanced and improved user experience as described above, the preview application described above includes one or more user interfaces (described in further detail below) that allow a user to visualize how the user will perceive the content item when the content item is placed into a given three-dimensional environment associated with an XR application.
In one or more examples, the user interface presented in
In one or more examples, the menu 406 of user interface 400 includes a selectable option 410 for overlaying a three-dimensional bounding volume (e.g., a bounding box) on the displayed content items 402 and 404. As described further below, when the device receives an indication from the user (e.g., by selection of the selectable option 410) to display a three-dimensional bounding volume in the content preview, the device generates a three-dimensional bounding volume to illustrate how the content item will fit into a given three-dimensional environment. The size of the bounding volume is based on information received from a content application about the size of the three-dimensional environment in which the content item being previewed will be placed. As discussed in further detail below, placing a bounding volume along with the content item in a preview allows the user to readily ascertain how the content item (in its current orientation, location, and size) will be perceived in the three-dimensional environment.
In one or more examples, the menu 406 of user interface 400 includes a selectable option 412 (labeled "clip" in the figure) that, when selected, causes the device to apply one or more visual attributes to any portions of a content item that will be "clipped" (i.e., not visible within the three-dimensional environment). As discussed in further detail below, "clipping" the content item in the preview user interface can be done alternatively or in addition to applying a three-dimensional bounding volume to the content preview, thereby providing the user of the electronic device with multiple options to preview how a virtual object (e.g., content item) will appear within a given three-dimensional environment.
In one or more examples, the dimensions of the bounding volume are determined from size information received at the electronic device from a content application associated with the virtual object being previewed (e.g., granted by the operating system of the electronic device). In one or more examples, the size information includes information about the height, width, and length of an application displayed in the three-dimensional environment. The size of an application in the three-dimensional environment is dependent on many factors, including but not limited to the type of application displayed in the three-dimensional environment, the type of electronic device used to display the three-dimensional application, other objects (real or virtual) in the three-dimensional environment, etc. In one or more examples, in addition to or alternatively to receiving size information from a content application associated with the virtual object being previewed, the user of the preview application (e.g., the developer) can specify a bounding volume/size that can be used to determine how the virtual object associated with an application would be constrained under different sizes (e.g., bounding volumes). In one or more examples, the user can specify the bounding volume using one or more user interfaces associated with the preview application. For instance, the user can enter size information manually into a user interface, and the preview application can use the entered size information to generate a bounding volume. Additionally or alternatively, the user can specify size information by modifying the source code used to generate the content preview.
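As a hedged illustration of the sizing choice just described, the Swift sketch below prefers a developer-specified override when one exists and otherwise uses the size reported for the application's environment; all names are hypothetical.

```swift
struct VolumeSize {
    var width: Double    // e.g., in metres
    var height: Double
    var length: Double
}

struct PreviewBoundingVolume {
    let size: VolumeSize

    /// Labels like those shown on the edges of the previewed bounding box.
    var edgeLabels: [String] {
        ["length: \(size.length) m",
         "height: \(size.height) m",
         "width: \(size.width) m"]
    }
}

/// Sizes the preview's bounding volume from the dimensions reported for the
/// content application's environment, unless the developer has entered an
/// override (via the preview UI or edited source) to test a different volume.
func boundingVolumeForPreview(applicationSize: VolumeSize,
                              developerOverride: VolumeSize?) -> PreviewBoundingVolume {
    PreviewBoundingVolume(size: developerOverride ?? applicationSize)
}
```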
In one or more examples, the device also receives size information associated with the content items 402 and 404. In one or more examples, size information about the virtual objects includes but is not limited to the height, length, and width of the virtual object. Additionally, size information optionally includes position information associated with a position of the virtual object within an application in a three-dimensional environment. For instance, the center of a virtual object may be offset from the center of the application within the three-dimensional environment such that the position of the virtual object will be shifted to a particular side of the application within the three-dimensional environment. Thus, when determining how a content item (e.g., virtual object) will fit within the application within the virtual environment, and whether any portion of the virtual object will not be visible in the three-dimensional virtual environment, the position of the virtual object within the application within the three-dimensional environment is taken into account.
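The offset-aware fit check described above could be expressed along the following lines; this Swift sketch uses hypothetical names and a simple box model, and is only one way the comparison might be performed.

```swift
struct Size3D { var width, height, length: Double }
struct Offset3D { var x, y, z: Double }   // object centre relative to the application volume's centre

/// Returns the amount (per side) by which the object extends beyond the
/// application volume; all zeros means the object fits entirely.
func overflow(objectSize: Size3D, offset: Offset3D, volume: Size3D) -> [String: Double] {
    func axisOverflow(extent: Double, shift: Double, limit: Double) -> (neg: Double, pos: Double) {
        let half = extent / 2
        let bound = limit / 2
        return (max(0, -(shift - half) - bound), max(0, (shift + half) - bound))
    }
    let x = axisOverflow(extent: objectSize.width,  shift: offset.x, limit: volume.width)
    let y = axisOverflow(extent: objectSize.height, shift: offset.y, limit: volume.height)
    let z = axisOverflow(extent: objectSize.length, shift: offset.z, limit: volume.length)
    return ["left": x.neg, "right": x.pos,
            "bottom": y.neg, "top": y.pos,
            "back": z.neg, "front": z.pos]
}

// Example: a 1.2 m-wide object shifted 0.5 m toward one side of a 1.5 m-wide
// volume overflows about 0.35 m on that side and would be clipped there.
let spill = overflow(objectSize: Size3D(width: 1.2, height: 1.0, length: 1.0),
                     offset: Offset3D(x: 0.5, y: 0, z: 0),
                     volume: Size3D(width: 1.5, height: 2.0, length: 2.0))
print(spill["right"] ?? 0)   // ≈ 0.35
```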
In one or more examples, the size information used to generate the bounding volumes 414 and 416 is included as part of the user interface 400. Optionally, the bounding volume 414 includes labels 418a-c, with each label corresponding to either the length, width, or height of the bounding volume. For instance, label 418a (representing the length of the bounding volume) is placed on an edge of the bounding volume corresponding to the length of the bounding volume. Similarly, label 418b (representing the height of the bounding volume) is placed on an edge of the bounding volume corresponding to the height of the bounding volume. Label 418c (representing the width of the bounding volume) is similarly placed on an edge of the bounding volume corresponding to the width of the bounding volume. Each label 418a-c optionally includes a numeric value corresponding to the dimension the label represents. For instance, label 418a includes the numeric value of the length of the bounding volume, 418b the numeric value of the height of the bounding volume, and 418c the numeric value of the width of the bounding volume.
As illustrated in
In one or more examples, viewing the content item and the bounding volume from multiple perspectives in the same user interface reveals additional portions of the content item that are outside of the bounding volume and thus will not be displayed in the three-dimensional environment associated with the content application. For instance, and as illustrated in
In one or more examples, rather than or in addition to using a bounding volume to determine which portions of a content item will be clipped (i.e., not visible in the three-dimensional environment due to exceeding the bounds of the three-dimensional environment), the user interface 400 applies one or more visual indicators to the content item to delineate which portions of the content item will be visible within the three-dimensional environment associated with the content application, and which portions of the content item will be clipped and thus not visible.
In one or more examples, the determination of which portions of the content item to apply a visual indicator to (so as to indicate that the portion is clipped) is based on comparing the size information of the content item (described above) with the size information of the three-dimensional environment (described above) and determining which portions of the content items 402 and 404 exceed the bounds of the three-dimensional environment. Optionally, the device can calculate the bounding volume as described above and use the calculated bounding volume to determine which portions of the content item will be clipped, without actually displaying the bounding volume, as illustrated in
In one or more examples, the one or more visual indicators include applying a color or highlighting to the portions of the content item. For instance, using content item 402 in the example of
In some examples, in addition to displaying one or more visual indicators (e.g., highlighting, etc.) to the portions of the content item that exceed the bounds of a bounding volume, in instances where the preview application displays the source code associated with a virtual object along with the preview (for example, see
In one or more examples, the user interface 400 includes both the three-dimensional bounding volume as well as the visual indicators in the content preview to indicate which portions of a content item will be clipped when displayed in a three-dimensional environment of the content application associated with the content item.
In one or more examples, at 504, the device receives three-dimensional size and location information associated with a virtual object of the software application. As described above, the three-dimensional size information includes (but is not limited to) information regarding the height, length, and width of the object (including information about the contours of the object), as well as location information (i.e., where inside of the three-dimensional environment of the software application the object is located). In one or more examples, and in response to a user input on a user interface (as described above with respect to
In one or more examples, at 508, the device displays, via the display generation component, and in accordance with the obtained three-dimensional size information associated with the three-dimensional environment, a bounding volume concurrently with the three-dimensional representation of the virtual object. As described above, the bounding volume is positioned on the user interface to illustrate the size and position relationship between the virtual object being previewed and the bounding volume, allowing the user to better ascertain which portions of the virtual object will be clipped when the object is rendered in a three-dimensional environment of a content application associated with the three-dimensional object.
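Tying the steps above together, a minimal Swift sketch of the flow (with hypothetical names and print statements standing in for actual rendering) might look as follows.

```swift
struct Dimensions { var width, height, length: Double }

struct VirtualObjectInfo {
    var size: Dimensions
    var centerOffset: (x: Double, y: Double, z: Double)   // position within the app volume
}

/// Displays the object per its size and location information (as at 504 above)
/// and, concurrently, a bounding volume sized from the environment granted to
/// the content application (as at 508 above).
func previewWithBoundingVolume(environmentSize: Dimensions, object: VirtualObjectInfo) {
    // Stand-in for rendering the virtual object per its size and location.
    print("render object of size \(object.size) at offset \(object.centerOffset)")
    // Stand-in for rendering the bounding volume concurrently with the object,
    // making the size relationship (and any clipping) apparent to the user.
    print("render bounding volume of size \(environmentSize)")
}

previewWithBoundingVolume(
    environmentSize: Dimensions(width: 1.5, height: 2.0, length: 1.5),
    object: VirtualObjectInfo(size: Dimensions(width: 1.2, height: 0.9, length: 1.0),
                              centerOffset: (x: 0.3, y: 0.0, z: 0.0))
)
```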
In one or more examples, a user previewing a content item according to the examples described above will want to modify an aspect of the content item by modifying the application code associated with the content item and then compare the modified content item with the original (pre-modification) content item (e.g., corresponding to the original, pre-modification application code). Additionally, the user may want to visually compare multiple "snapshots" of previews of a content item, with each snapshot representing a separate and distinct version of the content item corresponding to distinct application code. In some circumstances, in an instance where a user may be dissatisfied with a modification of a content item (based on its visual appearance), the user may want the ability to restore the content item to a previous version prior to the application of one or more modifications. Thus, in one or more examples, in order to provide an enhanced and improved user experience as described above, the preview application described above includes one or more user interfaces (described in further detail below) that allow a user to visualize and compare modifications to a content item and restore the content item to previous states if a modification that was made is no longer desired.
In one or more examples, a virtual object that is part of the application and is part of the source code 614 is visually previewed at one or more “viewports” such as viewports 602, 604, and 606 shown in
In one or more examples, a first viewport of the one or more viewports of user interface 600 can be configured to operate as a primary viewport. A "primary" viewport can refer to a viewport that is configured to provide a visual preview of the virtual object defined in the source code, based on the current status of the source code. Thus, in one or more examples, if the virtual object in the source code 614 is modified by the user, the visual content preview of the primary viewport is automatically updated (in real-time or near real-time, such as less than 100 ms) to reflect the modification made to the source code. In the example of user interface 600 of
In one or more examples, primary viewport 602, as well as viewports 604 and 606, include selectable options 608b, 610d, and 612d, respectively, for changing a perspective of the visual preview of the viewport. For instance, in response to detecting user selection of selectable option 608b, the device initiates a process to change the visual perspective of content preview 608a such that the user views the visual preview from a different perspective (e.g., a top view, a side view, a bottom view, etc.). Additionally or alternatively, in response to detecting selection of selectable option 608b, the device optionally allows the user to manually change the perspective by, for instance, clicking and dragging the visual preview to rotate the view of the content preview. The examples of
In one or more examples, viewports 604 and 606 (e.g., non-primary viewports) include one or more selectable options 610d and 612d, respectively, that allow the user to take a "snapshot" of a code modification, as described in further detail below. Selectable options 610b and 612b are configured to allow the user to toggle between a "play" mode and a "pause" mode. In the example of
In one or more examples, as illustrated at
In the example of
In the example of
In one or more examples, in the example of
In one or more examples, and as illustrated at
In some examples, the process continues at 706 wherein one or more modifications to the obtained application code are received (e.g., the source code 614 shown on the left side in
Therefore, according to the above, some examples of the disclosure are directed to a method comprising: at an electronic device in communication with a display and one or more input devices: obtaining, from an operating system, three-dimensional size information associated with a software application for a three-dimensional environment that has been granted to the software application by the operating system, obtaining, from the software application, three-dimensional size and location information associated with a virtual object of the software application, displaying, via the display and in accordance with the three-dimensional size and location information associated with the virtual object, a representation of the virtual object, and displaying, via the display and in accordance with the three-dimensional size information associated with the software application, a representation of a bounding volume for the software application concurrently with the representation of the virtual object.
Optionally, the method further comprises: determining, based on the obtained three-dimensional size and location information associated with the virtual object and based on the three-dimensional size information associated with the software application, that a portion of the virtual object is outside of the bounding volume, and in accordance with the determination that at least a portion of the virtual object is outside of the bounding volume, displaying the representation of the virtual object with the displayed bounding volume such that the portion of the representation of the virtual object is outside of the bounding volume.
Optionally, the method further comprises: while displaying the representation of the virtual object such that the portion of the representation of the virtual object is outside of the bounding volume, receiving, via the one or more input devices, an indication to apply one or more visual attributes at the portion of the representation of the virtual object that is outside of the bounding volume, and in response to the received input corresponding to a request to apply the one or more visual attributes, displaying one or more visual attributes at the portion of the representation of the virtual object that is outside of the bounding volume.
Optionally, the one or more visual attributes are configured to visually distinguish the portion of the representation of the virtual object that is outside of the bounding volume from one or more portions of the representation of the virtual object that are inside of the bounding volume.
Optionally, the method further comprises: determining, based on the size and location information associated with the virtual object and based on the size information associated with the software application, that the virtual object fits within the bounding volume, and in accordance with the determination that the virtual object fits within the bounding volume, displaying the representation of the virtual object within the displayed bounding volume.
Optionally, the method further comprises: while displaying the representation of the virtual object, and while displaying the bounding volume, receiving, via the one or more input devices, a first input corresponding to a selection to remove the bounding volume from being displayed, and in response to receiving the first input, ceasing display of the bounding volume.
Optionally, the size information associated with the software application includes information associated with a width, a height, and a depth of the software application.
Optionally, the size information associated with the width, height, and depth of the software application are displayed concurrently with the bounding volume.
Optionally, displaying the bounding volume concurrently with the representation of the virtual object includes displaying a first representation of the bounding volume concurrently with a first representation of the virtual object from a first viewpoint and displaying a second representation of the bounding volume concurrently with a second representation of the virtual object from a second viewpoint, different from the first viewpoint.
Optionally, the method comprises displaying a visual indicator indicating a corresponding size of the virtual object in the three-dimensional environment relative to a size of a physical object that is associated with the virtual object.
In one or more examples, a method comprises: at an electronic device in communication with a display generation component and one or more input devices: obtaining three-dimensional size information associated with a software application for a three-dimensional environment, obtaining three-dimensional size and location information associated with a virtual object for the software application, displaying, via the display and in accordance with the three-dimensional size and location information associated with the virtual object, a representation of the virtual object, determining a bounding volume of the software application based on the three-dimensional size information associated with the software application, determining that one or more portions of the representation of the virtual object are outside of the determined bounding volume of the software application, and in accordance with the determination that one or more portions of the representation of the virtual object are outside of the determined bounding volume of the software application, displaying one or more visual attributes at the one or more portions of the representation of the virtual object that are outside of the determined bounding volume.
In one or more examples, a method comprises: at an electronic device in communication with a display generation component and one or more input devices: obtaining application code associated with a virtual object, while displaying a preview user interface via the display generation component, wherein the preview interface includes a first preview viewport portion, displaying a first preview of the virtual object based on the received application code and concurrently displaying a second preview of the virtual object at a second preview viewport portion of the preview interface, obtaining one or more modifications to the received application code associated with the virtual object, updating the displayed preview of the virtual object at the first preview viewport portion based on the received one or more modifications of the application code, and in accordance with a determination that the second preview viewport portion is in a first mode, forgoing updating of the second preview of the virtual object at the second preview viewport portion based on the obtained one or more modifications of the application code.
Optionally, the method further comprises: in accordance with a determination that the second preview viewport portion is in a second mode, updating the second preview of the virtual object at the second preview viewport portion based on the received one or more modifications of the received application code.
Optionally, the method further comprises: in accordance with a determination that the second preview viewport is in the second mode, storing an application code snapshot associated with the virtual object displayed at the second preview viewport portion.
Optionally, the method further comprises: while the second preview viewport is in the first mode: receiving, via the one or more input devices, a first input at the second preview viewport to restore the preview of the virtual object displayed at the second preview viewport, and in response to receiving the first input, updating the preview of the virtual object displayed at the first preview viewport based on the stored application code snapshot associated with the virtual object displayed at the second preview viewport portion.
Optionally, the method further comprises: while the second preview viewport is in the first mode: receiving, via the one or more input devices, a second input at the second preview viewport to change the second preview viewport from the first mode to the second mode, and in response to receiving the second input, updating the second preview of the virtual object displayed at the second preview viewport portion based on the received one or more modifications of the received application code.
Optionally, the method further comprises: receiving, via the one or more input devices, a third input at the second preview viewport to change an orientation of the second preview of the virtual object, and in response to receiving the third input, updating the orientation of the second preview of the virtual object displayed at the second preview viewport portion.
Optionally, the method further comprises: obtaining three-dimensional size information associated with a three-dimensional environment generated by a software application associated with the application code, and displaying, at the second preview viewport portion, and in accordance with the received three-dimensional size information associated with the three-dimensional environment, a three-dimensional bounding volume concurrently with the second preview of the virtual object.
Some examples of the disclosure are directed to an electronic device. The electronic device can comprise: one or more processors; memory; and one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the above methods.
Some examples of the disclosure are directed to a non-transitory computer readable storage medium. The non-transitory computer readable storage medium can store one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the electronic device to perform any of the above methods.
Some examples of the disclosure are directed to an electronic device. The electronic device can comprise: one or more processors; memory; and means for performing any of the above methods.
Some examples of the disclosure are directed to an information processing apparatus for use in an electronic device. The information processing apparatus can comprise means for performing any of the above methods.
The foregoing description, for purpose of explanation, has been described with reference to specific examples. However, the illustrative discussions above are not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The examples were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best use the invention and various described examples with various modifications as are suited to the particular use contemplated.
This application claims the benefit of U.S. Provisional Application No. 63/506,111, filed Jun. 4, 2023, the content of which is herein incorporated by reference in its entirety for all purposes.