360-degree video offers immersive video experiences for virtual reality (VR) system users. However, due to the increased immersion provided by the 360-degree video format, transitions between virtual environments that may occur when using a 360-degree video application program may be jarring for users. Individual 360-degree videos are often short, and users may watch several 360-degree videos consecutively. In existing 360-degree video application programs, the user returns to a home virtual environment at the end of each video. The number of transitions in a single viewing session may therefore be large.
According to one aspect of the present disclosure, a head-mounted display device is provided, comprising a display, one or more input devices, and a processor. The processor may be configured to display a first 360-degree video on the display in a three-dimensional playback environment. The processor may be further configured to display a post-roll on the display when the first 360-degree video ends, wherein the post-roll is displayed in the three-dimensional playback environment and includes one or more interactable icons. The processor may be further configured to detect a selection of an interactable icon of the one or more interactable icons via the one or more input devices. The processor may be further configured to, in response to detecting the selection, perform a video environment navigation action.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
In view of the problem discussed above, the inventors have developed a system for reducing the number of virtual environment transitions that occur during a 360-degree video viewing session. This system is disclosed in the example embodiments herein.
For example, the head-mounted display device 10 may include an image production system 14 that is configured to display virtual objects to the user with the display 12. In the augmented reality configuration with an at least partially see-through display, the virtual objects are visually superimposed onto the physical environment that is visible through the display 12 so as to be perceived at various depths and locations. In the virtual reality configuration, the image production system 14 may be configured to display virtual objects to the user with the non-see-through stereoscopic display, such that the virtual objects are perceived to be at various depths and locations relative to one another. In one embodiment, the head-mounted display device 10 may use stereoscopy to visually place a virtual object at a desired depth by displaying separate images of the virtual object to each of the user's eyes. Using this stereoscopy technique, the head-mounted display device 10 may control the displayed images of the virtual objects, such that the user will perceive that the virtual objects exist at a desired depth and location in the viewed physical environment. In one example, the virtual object may be a cursor that is displayed to the user, such that the cursor appears to the user to be located at a desired location in the virtual three-dimensional environment. In the augmented reality configuration, the virtual object may be a holographic cursor that is displayed to the user, such that the holographic cursor appears to the user to be located at a desired location in the real-world physical environment.
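By way of non-limiting illustration, the following sketch shows how the horizontal offset between the left-eye and right-eye images may be computed so that a virtual object, such as the cursor, is perceived at a desired depth. A simple pinhole projection model is assumed; the interpupillary distance, focal length, and all identifiers are illustrative and are not specified by this disclosure.

```python
# Minimal sketch: placing a virtual object at a desired depth via stereoscopy.
# Assumes a simple pinhole camera model; all names and values are illustrative.

def eye_offsets_px(depth_m: float, ipd_m: float = 0.063,
                   focal_px: float = 1400.0) -> tuple[float, float]:
    """Return the horizontal pixel shifts applied to the left-eye and
    right-eye images so an object is perceived at depth_m meters.

    Binocular disparity under a pinhole model: d = f * ipd / depth.
    Each eye's image is shifted by half the disparity in opposite
    directions, converging the two views at the desired depth.
    """
    disparity_px = focal_px * ipd_m / depth_m
    return (+disparity_px / 2.0, -disparity_px / 2.0)

# Example: a cursor meant to appear 2 m away from the user.
left_shift, right_shift = eye_offsets_px(depth_m=2.0)
print(f"left eye shift: {left_shift:.1f}px, right eye shift: {right_shift:.1f}px")
```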
The head-mounted display device 10 may include one or more input devices with which the user may input information. The user input devices may include one or more optical sensors and one or more position sensors, which are discussed in further detail below. Additionally or alternatively, the user input devices may include one or more buttons, control sticks, microphones, touch-sensitive input devices, or other types of input devices.
The head-mounted display device 10 includes an optical sensor system 16 that may include one or more optical sensors. In one example, the optical sensor system 16 includes an outward-facing optical sensor 18 that may be configured to detect the real-world background from a similar vantage point (e.g., line of sight) as observed by the user through the display 12 in an augmented reality configuration. The optical sensor system 16 may additionally include an inward-facing optical sensor 20 that may be configured to detect a gaze direction of the user's eye. It will be appreciated that the outward-facing optical sensor 18 may include one or more component sensors, including an RGB camera and a depth camera. The RGB camera may be a high definition camera or have another resolution. The depth camera may be configured to project non-visible light and capture reflections of the projected light, and based thereon, generate an image comprised of measured depth data for each pixel in the image. This depth data may be combined with color information from the image captured by the RGB camera, into a single image representation including both color data and depth data, if desired. In a virtual reality configuration, the color and depth data captured by the optical sensor system 16 may be used to perform surface reconstruction and generate a virtual model of the real-world background that may be displayed to the user via the display 12. Alternatively, the image data captured by the optical sensor system 16 may be directly presented as image data to the user on the display 12.
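By way of non-limiting illustration, the following sketch shows one way the depth data may be combined with the color information into a single image representation, assuming the RGB camera and the depth camera are already registered pixel-for-pixel. The array shapes and identifiers are illustrative.

```python
# Minimal sketch: fusing an RGB image and a depth image into one RGB-D
# representation. Assumes the two sensors are already aligned (registered)
# pixel-for-pixel; shapes and values are illustrative.
import numpy as np

def fuse_rgbd(rgb: np.ndarray, depth_m: np.ndarray) -> np.ndarray:
    """Stack color (H, W, 3, uint8) and measured depth (H, W, float32 meters)
    into a single (H, W, 4) float32 image: R, G, B in [0, 1], depth in meters."""
    if rgb.shape[:2] != depth_m.shape:
        raise ValueError("RGB and depth images must share the same resolution")
    color = rgb.astype(np.float32) / 255.0
    return np.dstack([color, depth_m.astype(np.float32)])

# Example with synthetic data: a 480x640 frame, everything 1.5 m away.
rgb = np.zeros((480, 640, 3), dtype=np.uint8)
depth = np.full((480, 640), 1.5, dtype=np.float32)
rgbd = fuse_rgbd(rgb, depth)
print(rgbd.shape)  # (480, 640, 4)
```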
The head-mounted display device 10 may further include a position sensor system 22 that may include one or more position sensors such as accelerometer(s), gyroscope(s), magnetometer(s), global positioning system(s), multilateration tracker(s), and/or other sensors that output position sensor information useable as a position, orientation, and/or movement of the relevant sensor.
Optical sensor information received from the optical sensor system 16 and/or position sensor information received from position sensor system 22 may be used to assess a position and orientation of the vantage point of head-mounted display device 10 relative to other environmental objects. In some embodiments, the position and orientation of the vantage point may be characterized with six degrees of freedom (e.g., world-space X, Y, Z, pitch, roll, yaw). The vantage point may be characterized globally or independent of the real-world background. The position and/or orientation may be determined with an on-board computing system (e.g., on-board computing system 24) and/or an off-board computing system, which may include at least one processor 24A and/or at least one memory unit 24B.
Furthermore, the optical sensor information and the position sensor information may be used by a computing system to perform analysis of the real-world background, such as depth analysis, surface reconstruction, environmental color and lighting analysis, or other suitable operations. In particular, the optical and positional sensor information may be used to create a virtual model of the real-world background. In some embodiments, the position and orientation of the vantage point may be characterized relative to this virtual space. Moreover, the virtual model may be used to determine positions of virtual objects in the virtual space and add additional virtual objects to be displayed to the user at a desired depth and location within the virtual world.
Additionally, the optical sensor information received from the optical sensor system 16 may be used to identify and track objects in the field of view of optical sensor system 16. For example, depth data captured by optical sensor system 16 may be used to identify and track motion of a user's hand. The tracked motion may include movement of the user's hand in three-dimensional space, and may be characterized with six degrees of freedom (e.g., world-space X, Y, Z, pitch, roll, yaw). The tracked motion may also be used to identify and track a hand gesture made by the user's hand. For example, one identifiable hand gesture may be moving a forefinger upwards or downwards. It will be appreciated that other methods may be used to identify and track motion of the user's hand. For example, optical tags may be placed at known locations on the user's hand or a glove worn by the user, and the optical tags may be tracked through the image data captured by optical sensor system 16.
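By way of non-limiting illustration, the following sketch classifies the forefinger gesture described above from a short history of tracked fingertip positions. The travel threshold and identifiers are illustrative assumptions rather than values specified by this disclosure.

```python
# Minimal sketch: classifying the upward/downward forefinger gesture from a
# short history of tracked fingertip heights. The threshold and window
# length are illustrative assumptions, not values from the disclosure.

def classify_forefinger(y_history_m: list[float],
                        threshold_m: float = 0.03) -> str | None:
    """y_history_m: recent fingertip heights in meters, oldest first.
    Returns 'up', 'down', or None if net vertical travel is below threshold."""
    if len(y_history_m) < 2:
        return None
    travel = y_history_m[-1] - y_history_m[0]
    if travel > threshold_m:
        return "up"
    if travel < -threshold_m:
        return "down"
    return None

print(classify_forefinger([1.10, 1.12, 1.15]))  # 'up'
print(classify_forefinger([1.15, 1.14, 1.10]))  # 'down'
```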
It will be appreciated that the following examples and methods may be applied to both a virtual reality and an augmented reality configuration of the head-mounted display device 10. In a virtual reality configuration, the display 12 of the head-mounted display device 10 is a non-see-through display, and the three-dimensional environment is a virtual environment displayed to the user. The virtual environment may be a virtual model generated based on image data captured of the real-world background by optical sensor system 16 of the head-mounted display device 10. Additionally, a cursor having a modifiable visual appearance is displayed to the user on the display 12 as having a virtual location within the three-dimensional environment. In an augmented reality configuration, the cursor is a holographic cursor that is displayed on an at least partially see-through display, such that the cursor appears to be superimposed onto the physical environment being viewed by the user.
When the head-mounted display device 10 is in a virtual reality configuration, processor 24A of the head-mounted display device 10 may be configured to display a 360-degree video on the display 12.
The three-dimensional playback environment 28 including the post-roll 30 is described in greater detail below.
In response to the selection of an interactable icon, if the interactable icon is a preview image of an additional 360-degree video, the video environment navigation action may include displaying the additional 360-degree video on the display 12. The additional 360-degree video may be displayed without returning to a three-dimensional virtual home environment or menu screen. Instead, the processor 24A may be configured to continue to display the three-dimensional playback environment 28 when the additional 360-degree video is displayed. The number of transitions between three-dimensional virtual environments that occur in one session of 360-degree video viewing may thereby be reduced.
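By way of non-limiting illustration, the following sketch contrasts playing an additional video within the existing three-dimensional playback environment 28 with exiting to a home environment. All identifiers are hypothetical.

```python
# Minimal sketch of the behavior described above: when a preview icon is
# selected, the player swaps in the next 360-degree video without tearing
# down the three-dimensional playback environment. All names are hypothetical.

class PlaybackEnvironment:
    def __init__(self) -> None:
        self.transitions = 0  # environment transitions this session
        self.current_video: str | None = None

    def play(self, video_id: str) -> None:
        # Reuse the existing environment: no transition is counted.
        self.current_video = video_id

    def exit_to_home(self) -> None:
        self.transitions += 1
        self.current_video = None

env = PlaybackEnvironment()
env.play("video-a")
env.play("video-b")     # selected from the post-roll: same environment
print(env.transitions)  # 0 transitions instead of one per video
```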
The video environment navigation action performed in response to the selection of an interactable icon may include launching an application program. Examples of launching an application program in response to the selection of an interactable icon are described below.
In some embodiments, the processor 24A may determine whether the application program 60 specified by the selected interactable icon is installed on the one or more memory units 24B of the head-mounted display device 10. If the application program 60 indicated by the interactable icon is already installed on the one or more memory units 24B, the processor 24A may be configured to launch the application program 60. If the application program is not installed, the processor 24A may be configured to launch an application store program 60B, which may include an option 62 to buy the application program 60. Alternatively, the processor 24A may display an error message or perform some other video environment navigation action.
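By way of non-limiting illustration, the following sketch shows the launch logic described above: the application program is launched if installed, and the application store program is launched otherwise. The installed-program registry and identifiers are hypothetical stand-ins for platform APIs.

```python
# Minimal sketch of the launch logic described above. The registry of
# installed programs and the launch behavior are hypothetical stand-ins
# for platform APIs.

INSTALLED_PROGRAMS = {"web_browser"}  # illustrative installed set

def handle_app_icon_selection(app_id: str) -> str:
    """Launch the program named by the icon, or fall back to the app store."""
    if app_id in INSTALLED_PROGRAMS:
        return f"launched {app_id}"
    # Not installed: open the store page offering an option to buy it.
    return f"launched app store at page for {app_id}"

print(handle_app_icon_selection("web_browser"))   # launched web_browser
print(handle_app_icon_selection("video_editor"))  # launched app store at page for video_editor
```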
In some embodiments, the application program 60 may include an option 64 to purchase at least one of the first 360-degree video 26 and another 360-degree video. For example, the processor 24A may launch the web browser 60A and navigate to a webpage that includes such an option 64 in response to selection of the “Buy Video” button 42B.
When the “Replay” interactable icon 54A is selected, the video environment navigation action may include replaying the first 360-degree video 26.
When the “Refresh” interactable icon 54B is selected, the video environment navigation action may include refreshing the post-roll 30. When the post-roll 30 is refreshed, at least one new interactable icon may be displayed in the three-dimensional playback environment 28. In addition, at least one interactable icon may be removed from the three-dimensional playback environment 28. For example, the at least one new interactable icon may be a preview for a 360-degree video not displayed before the “Refresh” interactable icon 54B is selected, and may replace one of the previews 42, 44, 46, 48, 50, and 52. A user who does not desire to watch any of the 360-degree videos previewed in the post-roll 30 may therefore refresh the post-roll 30 in order to view previews of other 360-degree videos. In some embodiments, the at least one new interactable icon may be determined based on one or more filtering criteria. For example, the one or more filtering criteria may be entered by the user as one or more search terms, or may be determined based on one or more 360-degree videos previously watched by the user.
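By way of non-limiting illustration, the following sketch refreshes the post-roll by excluding previously shown previews and selecting replacements that match one or more filtering criteria. The catalog, tags, and matching rule are illustrative.

```python
# Minimal sketch of refreshing the post-roll: previously shown previews are
# excluded and replacements are chosen by filtering criteria (e.g., search
# terms or tags of previously watched videos). The catalog is illustrative.

CATALOG = {
    "reef-dive": {"tags": {"ocean", "nature"}},
    "city-tour": {"tags": {"travel", "city"}},
    "safari":    {"tags": {"nature", "travel"}},
}

def refresh_post_roll(shown: set[str], criteria: set[str], slots: int) -> list[str]:
    """Pick up to `slots` videos not yet shown whose tags match any criterion."""
    candidates = [vid for vid, meta in CATALOG.items()
                  if vid not in shown and meta["tags"] & criteria]
    return candidates[:slots]

# Criteria derived from previously watched nature videos:
print(refresh_post_roll(shown={"reef-dive"}, criteria={"nature"}, slots=2))
# ['safari']
```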
When the “Exit” interactable icon 56 is selected, the video environment navigation action may include exiting the three-dimensional playback environment 28. Subsequently to exiting the three-dimensional playback environment 28, the processor 24A may be further configured to display a three-dimensional virtual home environment or menu.
In some embodiments, one or more of the “Replay” interactable icon 54A, the “Refresh” interactable icon 54B, and the “Exit” interactable icon 56 may be displayed at a depth different from the depth at which the previews 42, 44, 46, 48, 50, and 52 of the additional 360-degree videos are displayed. Thus, the “Replay” interactable icon 54A, the “Refresh” interactable icon 54B, and the “Exit” interactable icon 56 may be made more easily distinguishable from the other interactable icons included in the post-roll 30.
In some embodiments, the post-roll 30 may be displayed at a fixed location within the three-dimensional playback environment 28.
In embodiments in which the head-mounted display device 10 includes a position sensor system 22, the processor 24A may be further configured to receive a position sensor input that indicates movement of the head-mounted display device 10 in a physical environment 70. In response to receiving the position sensor input, the processor 24A may be further configured to relocate the post-roll 30 within the three-dimensional playback environment 28.
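By way of non-limiting illustration, the following sketch relocates the post-roll so that it remains in front of the user after head movement, simplified to yaw-only rotation at a fixed viewing distance. The values and identifiers are illustrative.

```python
# Minimal sketch: relocating the post-roll so it remains in front of the
# user after head movement. Simplified to yaw-only rotation at a fixed
# viewing distance; values are illustrative.
import math

def relocate_post_roll(head_yaw_rad: float,
                       distance_m: float = 2.0) -> tuple[float, float]:
    """Return an (x, z) position on the horizontal plane, distance_m in
    front of the user along the current head yaw direction."""
    x = distance_m * math.sin(head_yaw_rad)
    z = distance_m * math.cos(head_yaw_rad)
    return (x, z)

print(relocate_post_roll(0.0))          # (0.0, 2.0): straight ahead
print(relocate_post_roll(math.pi / 2))  # (2.0, ~0.0): user turned right
```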
In some embodiments, characteristics of an interactable icon included in the post-roll 30 may be specified by a content provider. For example, the processor 24A may be further configured to receive one or more icon parameters 82 of the interactable icon from a server computing device 80.
The one or more icon parameters 82 may indicate at least one of a position 84 and an appearance 86 of the interactable icon. The appearance 86 of the interactable icon may include, for example, a depth, color, brightness, and/or image displayed as part of the interactable icon. Subsequently to receiving the one or more icon parameters 82 from the server computing device 80, the processor 24A may be configured to display the interactable icon based at least in part on the one or more icon parameters 82.
The one or more icon parameters 82 may also indicate the video environment navigation action 88 performed when the processor 24A detects the selection of the interactable icon. When the video environment navigation action 88 includes launching an application program 60, the video environment navigation action 88 specified in the one or more icon parameters 82 may indicate the application program 60. When the application program 60 is a web browser 60A, the video environment navigation action 88 specified in the one or more icon parameters 82 may include a web address of a webpage to which the processor 24A is configured to navigate upon launching the web browser 60A.
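By way of non-limiting illustration, the following sketch shows icon parameters such as those described above being parsed on the client, including a position, an appearance, a navigation action, and an optional web address for browser-launching icons. The field names are illustrative and do not represent a published schema.

```python
# Minimal sketch of icon parameters received from a server computing device,
# covering position, appearance, and the navigation action (with a web
# address when the action launches a web browser). Field names are
# illustrative, not a published schema.
from dataclasses import dataclass

@dataclass
class IconParameters:
    position: tuple[float, float, float]  # location in the playback environment
    depth_m: float                        # perceived display depth
    image_url: str                        # image displayed as part of the icon
    action: str                           # e.g. "play_video", "launch_browser"
    web_address: str | None = None        # only for browser-launching icons

def parse_icon_parameters(msg: dict) -> IconParameters:
    return IconParameters(
        position=tuple(msg["position"]),
        depth_m=msg["depth_m"],
        image_url=msg["image_url"],
        action=msg["action"],
        web_address=msg.get("web_address"),
    )

params = parse_icon_parameters({
    "position": [0.5, 1.2, 2.0], "depth_m": 2.0,
    "image_url": "https://example.com/preview.png",
    "action": "launch_browser", "web_address": "https://example.com/buy",
})
print(params.action, params.web_address)
```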
A method 100 for use with a head-mounted display device is also provided. The method 100 may include displaying a first 360-degree video on a display in a three-dimensional playback environment and, when the first 360-degree video ends, displaying a post-roll that is displayed in the three-dimensional playback environment and includes one or more interactable icons. At step 106, the method 100 may further include detecting a selection of an interactable icon of the one or more interactable icons via one or more input devices. Detecting the selection of the interactable icon may include detecting, for example, a gaze input, a gesture input, a button press or touch input on the head-mounted display device or an associated controller device, or some other form of input.
At step 108, in response to detecting the selection of the interactable icon, the method 100 may further include performing a video environment navigation action. Steps 110, 112, 114, 116, 118, 120A, and 120B are example video environment navigation actions that may be performed as part of step 108. At step 110, in embodiments in which the selected interactable icon of the one or more interactable icons is a preview image of a second 360-degree video, performing the video environment navigation action may include displaying the second 360-degree video on the display. At step 112, performing the video environment navigation action may include exiting the three-dimensional playback environment. In embodiments in which step 112 is performed, performing the video environment navigation action may further include, at step 114, displaying a three-dimensional virtual home environment or menu. At step 116, performing the video environment navigation action may include launching an application program. In some embodiments, the application program may be a web browser. In such embodiments, launching the web browser may include navigating to a webpage specified by the interactable icon. At step 118, in some embodiments, if the application program indicated by the interactable icon is not installed on the head-mounted display device, performing the video environment navigation action may include launching an application store program. At step 120A, performing the video environment navigation action may include replaying the first 360-degree video, for example, when the interactable icon is a “Replay” button. At step 120B, performing the video environment navigation action may include refreshing the post-roll, for example, when the interactable icon is a “Refresh” button.
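By way of non-limiting illustration, the following sketch dispatches the example video environment navigation actions enumerated in steps 110 through 120B. The action names and handler bodies are illustrative placeholders.

```python
# Minimal sketch of dispatching the video environment navigation actions
# enumerated above (steps 110-120B). Action names mirror the description;
# the handler bodies are illustrative placeholders.

def perform_navigation_action(action: str, **kwargs) -> str:
    if action == "play_second_video":    # step 110
        return f"playing {kwargs['video_id']} in the same environment"
    if action == "exit_environment":     # steps 112 + 114
        return "exited playback environment; showing home environment"
    if action == "launch_application":   # steps 116 / 118
        return f"launching {kwargs['app_id']} (or app store if not installed)"
    if action == "replay":               # step 120A
        return "replaying the first 360-degree video"
    if action == "refresh_post_roll":    # step 120B
        return "refreshing post-roll previews"
    raise ValueError(f"unknown action: {action}")

print(perform_navigation_action("play_second_video", video_id="video-b"))
```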
In the examples provided above, the post-roll is displayed when the 360-degree video ends. However, instead of a post-roll displayed at the end of a 360-degree video, a mid-roll may be displayed partway through the 360-degree video. For example, when playing a long video, the processor may be configured to display a mid-roll during an intermission. In such embodiments, a visual effect such as blurring may be applied to an intermediate frame rather than the last frame of the 360-degree video when the mid-roll is displayed.
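By way of non-limiting illustration, the following sketch applies a simple separable box blur to a paused frame, as one possible visual effect behind a post-roll or mid-roll. The particular blur and kernel size are not specified by this disclosure.

```python
# Minimal sketch of a blurring visual effect applied to the paused frame
# behind a post-roll or mid-roll. A separable box blur is used here for
# simplicity; the actual effect and kernel are not specified by the text.
import numpy as np

def box_blur(frame: np.ndarray, radius: int = 4) -> np.ndarray:
    """Blur an (H, W, C) frame with a (2*radius+1)-wide box kernel applied
    along each image axis in turn."""
    k = 2 * radius + 1
    kernel = np.ones(k, dtype=np.float32) / k
    out = frame.astype(np.float32)
    for axis in (0, 1):  # vertical pass, then horizontal pass
        out = np.apply_along_axis(
            lambda row: np.convolve(row, kernel, mode="same"), axis, out)
    return out.astype(frame.dtype)

frame = np.random.randint(0, 256, (90, 160, 3), dtype=np.uint8)
print(box_blur(frame).shape)  # (90, 160, 3)
```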
In some embodiments, the methods and processes described herein may be tied to a computing system of one or more computing devices. In particular, such methods and processes may be implemented as a computer-application program or service, an application-programming interface (API), a library, and/or other computer-program product.
Computing system 200 includes a logic processor 204, volatile memory 208, and a non-volatile storage device 212. Computing system 200 may optionally include a display subsystem 216, input subsystem 220, communication subsystem 224, and/or other components not shown.
Logic processor 204 includes one or more physical devices configured to execute instructions. For example, the logic processor may be configured to execute instructions that are part of one or more applications, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, achieve a technical effect, or otherwise arrive at a desired result.
The logic processor may include one or more physical processors (hardware) configured to execute software instructions. Additionally or alternatively, the logic processor may include one or more hardware logic circuits or firmware devices configured to execute hardware-implemented logic or firmware instructions. Processors of the logic processor 204 may be single-core or multi-core, and the instructions executed thereon may be configured for sequential, parallel, and/or distributed processing. Individual components of the logic processor optionally may be distributed among two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of the logic processor may be virtualized and executed by remotely accessible, networked computing devices configured in a cloud-computing configuration. In such a case, these virtualized aspects may be run on different physical logic processors of various different machines.
Volatile memory 208 may include physical devices that include random access memory. Volatile memory 208 is typically utilized by logic processor 204 to temporarily store information during processing of software instructions. It will be appreciated that volatile memory 208 typically does not continue to store instructions when power is cut to the volatile memory 208.
Non-volatile storage device 212 includes one or more physical devices configured to hold instructions executable by the logic processors to implement the methods and processes described herein. When such methods and processes are implemented, the state of non-volatile storage device 212 may be transformed—e.g., to hold different data.
Non-volatile storage device 212 may include physical devices that are removable and/or built-in. Non-volatile storage device 212 may include optical memory (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory (e.g., ROM, EPROM, EEPROM, FLASH memory, etc.), and/or magnetic memory (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), or other mass storage device technology. Non-volatile storage device 212 may include nonvolatile, dynamic, static, read/write, read-only, sequential-access, location-addressable, file-addressable, and/or content-addressable devices. It will be appreciated that non-volatile storage device 212 is configured to hold instructions even when power is cut to the non-volatile storage device 212.
Aspects of logic processor 204, volatile memory 208, and non-volatile storage device 212 may be integrated together into one or more hardware-logic components. Such hardware-logic components may include field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), system-on-a-chip (SOC), and complex programmable logic devices (CPLDs), for example.
The term “program” may be used to describe an aspect of computing system 200 implemented to perform a particular function. In some cases, a program may be instantiated via logic processor 204 executing instructions held by non-volatile storage device 212, using portions of volatile memory 208. It will be understood that different programs may be instantiated from the same application, service, code block, object, library, routine, API, function, etc. Likewise, the same program may be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc. The term “program” encompasses individual or groups of executable files, data files, libraries, drivers, scripts, database records, etc.
When included, display subsystem 216 may be used to present a visual representation of data held by non-volatile storage device 212. As the herein described methods and processes change the data held by the non-volatile storage device, and thus transform the state of the non-volatile storage device, the state of display subsystem 216 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 216 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic processor 204, volatile memory 208, and/or non-volatile storage device 212 in a shared enclosure, or such display devices may be peripheral display devices.
When included, input subsystem 220 may comprise or interface with one or more user-input devices such as a keyboard, mouse, touch screen, or game controller. In some embodiments, the input subsystem may comprise or interface with selected natural user input (NUI) componentry. Such componentry may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on- or off-board. Example NUI componentry may include a microphone for speech and/or voice recognition; an infrared, color, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection, gaze detection, and/or intent recognition; as well as electric-field sensing componentry for assessing brain activity; and/or any other suitable sensor.
When included, communication subsystem 224 may be configured to communicatively couple computing system 200 with one or more other computing devices. Communication subsystem 224 may include wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem may be configured for communication via a wireless telephone network, or a wired or wireless local- or wide-area network. In some embodiments, the communication subsystem may allow computing system 200 to send and/or receive messages to and/or from other devices via a network such as the Internet.
According to one aspect of the present disclosure, a head-mounted display device is provided, the head-mounted display device comprising a display, one or more input devices, and a processor. The processor may be configured to display a first 360-degree video on the display in a three-dimensional playback environment. The processor may be further configured to display a post-roll on the display when the first 360-degree video ends. The post-roll may be displayed in the three-dimensional playback environment and may include one or more interactable icons. The processor may be further configured to detect a selection of an interactable icon of the one or more interactable icons via the one or more input devices. In response to detecting the selection, the processor may be further configured to perform a video environment navigation action.
According to this aspect, the interactable icon of the one or more interactable icons may be a preview image of a second 360-degree video. The video environment navigation action may include displaying the second 360-degree video on the display.
According to this aspect, the video environment navigation action may include exiting the three-dimensional playback environment. According to this aspect, the processor may be further configured to display a three-dimensional virtual home environment subsequently to exiting the three-dimensional playback environment.
According to this aspect, the post-roll may be displayed over at least a last frame of the first 360-degree video. According to this aspect, a visual effect may be applied to the last frame of the first 360-degree video when the post-roll is displayed.
According to this aspect, the post-roll may be displayed at a fixed location within the three-dimensional playback environment.
According to this aspect, the one or more input devices may include at least one position sensor. In response to receiving a position sensor input that indicates movement of the head-mounted display device in a physical environment, the processor may be further configured to relocate the post-roll within the three-dimensional playback environment.
According to this aspect, the video environment navigation action may include launching an application program. According to this aspect, the application program may be a web browser, and launching the web browser may include navigating to a webpage specified by the interactable icon. According to this aspect, the application program may be an application store program. According to this aspect, the application program may include an option to purchase at least one of the first 360-degree video and a second 360-degree video.
According to this aspect, the video environment navigation action may include replaying the first 360-degree video.
According to this aspect, the one or more input devices may include a camera configured to track a gaze direction of a user. The selection of the interactable icon may be detected based at least in part on the gaze direction. According to this aspect, the processor may be further configured to display a cursor at a location in the three-dimensional playback environment based at least in part on the gaze direction of the user. The processor may be further configured to modify an appearance of an interactable icon overlapped by the cursor.
According to this aspect, the processor may be further configured to receive one or more icon parameters of the interactable icon of the one or more interactable icons from a server computing device. The processor may be further configured to display the interactable icon based at least in part on the one or more icon parameters. The one or more icon parameters may indicate at least one of a position and an appearance of the interactable icon. According to this aspect, the one or more icon parameters may indicate the video environment navigation action performed when the selection of the interactable icon is detected.
According to another aspect of the present disclosure, a method for use with a head-mounted display device is provided, comprising displaying a first 360-degree video on a display in a three-dimensional playback environment. The method may further comprise displaying a post-roll on the display when the first 360-degree video ends. The post-roll may be displayed in the three-dimensional playback environment and may include one or more interactable icons. The method may further comprise detecting a selection of an interactable icon of the one or more interactable icons via one or more input devices. In response to detecting the selection, the method may further comprise performing a video environment navigation action.
According to this aspect, the interactable icon of the one or more interactable icons may be a preview image of a second 360-degree video. Performing the video environment navigation action may include displaying the second 360-degree video on the display.
According to another aspect of the present disclosure, a head-mounted display device is provided, the head-mounted display device comprising a display, one or more input devices, and a processor. The processor may be configured to display a first 360-degree video on the display in a three-dimensional playback environment. The processor may be further configured to display one or more interactable icons on the display in the three-dimensional playback environment. The one or more interactable icons may include at least a preview image of a second 360-degree video. The processor may be further configured to detect a selection of an interactable icon of the one or more interactable icons via the one or more input devices. In response to detecting the selection, the processor may be further configured to perform a video environment navigation action.
It will be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Likewise, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and non-obvious combinations and sub-combinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.