The present application relates to head-mounted display devices, and more specifically to systems and methods for generating display views tracking user head movement.
Approaches described in this section should not be assumed to qualify as prior art merely by virtue of their inclusion therein.
Conventional systems for Virtual Reality (VR) and/or Augmented Reality (AR) typically include a video source device coupled to a head-mounted display (HMD) device mounted to a user's head. These conventional systems may detect user head movement at the HMD device, utilize the video source device to compute and adjust the display view based on the movement, and then send the adjusted display view from the video source device to the HMD device to provide displays to each of the user's eyes. These conventional systems place substantial demands on the video source device (e.g., typically on its graphics processing unit (GPU)) in terms of performance and power, and, at the same time, may not provide acceptable latency when the user moves his or her head while using the HMD device.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
The systems and methods according to various embodiments of the present technology may effectively reduce rendering latency of the display view and reduce the performance requirement for the GPU on the video source device side to achieve desirable virtual reality and/or augmented reality user experiences.
Disclosed are systems and methods for generating display views that track head movement of a user of a head-mounted display (HMD) device. In some embodiments, a method for generating a display view for an HMD device includes receiving, at the HMD device, a display image from a source device. The method may include, in response to movement of the head of a user wearing the HMD device, generating, at the HMD device, video offset data via at least one motion sensor in the HMD device. The video offset data may be applied, at the HMD device, to the display image to generate the display view. In various embodiments, the display view is smaller than, and a subset of, the display image. The method may further include presenting the display view to the user (or configuring the display view for visual presentation to the user).
In certain embodiments, a method for generating a display image for an HMD device includes generating, by a source device, a display image for the HMD device. The method may include sending, from the source device, a first display image having a first display image boundary to the HMD device, and receiving, at the source device, movement data from the HMD device in response to movement of the head of a user wearing the HMD device. The method may also include determining, at the source device, based on the movement data, that the movement would cause the user's view to be outside of the display image boundary. In response to the determination, the method may further include generating, at the source device, a second display image having a second, different display image boundary. The method may further include sending, from the source device, the second display image to the HMD device.
In some embodiments, an HMD device wearable by a user includes at least one motion sensor configured to generate video offset data in response to movement of the head of the user; and a pixel data generator that receives a display image from a source device and generates a display view (e.g., in the form of pixel data) from both the display image and the video offset data, the display view being smaller than, and a subset of, the display image. In various embodiments, the HMD also includes circuitry for configuring the display view for viewing by the user wearing the HMD device.
The systems and methods may generate the display view, which tracks the head movement of a user when wearing the HMD device. The systems and methods may generate the new display views for left-eye and right-eye displays by applying the video offset in horizontal and vertical directions of the incoming display image, and extracting the correct display views from the display image. Accordingly, the systems and methods can effectively reduce rendering latency for the display views, and lower the performance and power requirements for a graphics processing unit (GPU) of the source device for achieving desirable VR and AR user experiences.
Other example embodiments of the disclosure and aspects will become apparent from the following description taken in conjunction with the following drawings.
Embodiments are illustrated by way of example and not limitation in the figures of the drawings, in which like references indicate similar elements.
The following detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show illustrations in accordance with example embodiments. These example embodiments, which are also referred to herein as “examples,” are described in enough detail to enable those skilled in the art to practice the present subject matter. The embodiments can be combined, other embodiments can be utilized, or structural, logical and electrical changes can be made without departing from the scope of what is claimed. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope is defined by the appended claims and their equivalents.
The technology disclosed herein relates to systems and methods for generating a display view that tracks head movement of a user when the user is wearing a head-mounted display (HMD) device. In various embodiments, the systems and methods provide for generation of left-eye and right-eye display views for left-eye and right-eye displays, respectively, by applying a video offset in horizontal and/or vertical directions of the incoming display image and extracting the display views from the display image.
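By way of a non-limiting illustration, the extraction of a display view from a larger display image may be sketched as follows. The function name, the representation of the image as a two-dimensional array of pixels, and the clamping behavior are hypothetical and are not part of the specification:

```python
# Illustrative sketch: extract a display view from a larger display image
# by applying horizontal and vertical video offsets. All names here are
# hypothetical; the image is modeled as a 2-D list of pixel rows.

def extract_display_view(display_image, offset_x, offset_y, view_width, view_height):
    """Crop a view of size (view_width, view_height) from display_image,
    shifted by the video offset (offset_x, offset_y)."""
    image_height = len(display_image)
    image_width = len(display_image[0])
    # Clamp the offset so the view stays inside the display image boundary.
    x = max(0, min(offset_x, image_width - view_width))
    y = max(0, min(offset_y, image_height - view_height))
    return [row[x:x + view_width] for row in display_image[y:y + view_height]]
```

Because the crop is a simple index shift over data already buffered at the HMD device, it can be performed without involving the source device's GPU.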
Other systems and methods may track user head movement in the HMD device and use the video source device to adjust the display view. These other systems place substantial demands on the graphics processing unit (GPU) in the video source device, which must adjust the display view before the adjusted display view is sent to the HMD device. This can result in undesirable latency being experienced by the user when the user moves his or her head, while at the same time requiring more power and performance from the video source device and its GPU. As a result, these other systems can exhibit undesirable latency and power demands, and provide an undesirable user experience when the user moves the user's head. The systems and methods according to various embodiments can effectively reduce rendering latency of the display view and reduce the performance requirement for the GPU on the source side to achieve desirable VR/AR user experiences.
In general, in order to provide a desirable VR/AR user experience, there should be low image rendering latency after the user moves his or her head (normally less than 20 ms, to avoid the user experiencing undesirable lag in the update of the image), a high video refresh rate (greater than 60 Hz, to avoid flicker), and a high video resolution for each eye (greater than Full High Definition (FHD) 1080p resolution). Each of these factors puts a high demand on GPU performance and, hence, raises the cost of the GPU and, in turn, of VR/AR-capable mobile devices, such as smartphones or tablets, and personal computing devices.
The pixel data generator 101 may receive video data from the video source device 100. The pixel data generator 101 may provide a video data output to the driver ICs 102, which, in turn, may drive the display panels 103 to distribute output video data to a left-eye display and a right-eye display, respectively. In some embodiments, the interface between the driver IC 102 and the respective display panel 103 is a Mobile Industry Processor Interface (MIPI). However, it is to be understood that the interface may use another suitable protocol.
In the example in
For the example in
The approach in the examples in
In one or more embodiments, instead of sending only the display view each time a screen image is refreshed, the source device 300 sends the whole display image, or a partial image, which is larger than the actual display view. The determining of whether to send the whole display image or partial image may depend on the available bandwidth of the link between the source device 300 and the pixel data generator 301, as well as the processing power of the GPU of the source device 300. In various embodiments, the pixel data generator 301 receives the display image from the source device 300. The pixel data generator 301 may then generate the display view, at least in part, from the display image, as will be explained in further detail below.
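The decision between sending the whole display image and a partial image, as described above, may be sketched as follows. This is an illustrative, non-limiting example; the function name, the byte-budget heuristic, and the `gpu_has_headroom` flag are hypothetical and are not part of the specification:

```python
# Illustrative sketch: choose whether the source device sends the whole
# display image or a smaller partial image (still larger than the display
# view), based on link bandwidth and GPU headroom. Names are hypothetical.

def choose_transfer_region(full_image_bytes, partial_image_bytes,
                           link_bandwidth_bps, refresh_hz, gpu_has_headroom):
    """Return 'full' or 'partial' for the region to send each refresh."""
    # Bytes the link can carry in one refresh interval.
    bytes_per_refresh = link_bandwidth_bps / 8 / refresh_hz
    if gpu_has_headroom and full_image_bytes <= bytes_per_refresh:
        return 'full'
    return 'partial'
```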
Although element 304 is referred to herein as motion sensors, one or more motion sensors may be present. The motion sensors 304 may include one or more accelerometers, gyroscopes, or other suitable sensors that detect motion, in addition to at least one processor coupled to memory. The motion sensors 304 differ from the motion sensors 104 in the example in
In various embodiments, the pixel data generator 301 differs from the pixel data generator 101 in various respects. The pixel data generator 301 may be configured to receive the video offset data from the motion sensors 304. The video offset data may be transferred physically from the motion sensors 304 to the pixel data generator 301 through a digital interface, e.g., a Serial Peripheral Interface (SPI) interface or a Universal Serial Bus (USB) interface, to name a few. The pixel data generator 301 may also be configured to apply the video offset data to the received display image, and to generate the new display view therefrom. In various embodiments, the pixel data generator 301 (also referred to as the generator herein) provides pixel data, for the new display view, to the driver ICs 102, which, in turn, drive the display panels 103 to provide output video data to a left-eye display and a right-eye display respectively, for viewing by a user.
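As a non-limiting illustration of the per-refresh operation of the pixel data generator 301, left-eye and right-eye display views may be cropped from the buffered display image using the latest video offset. The function names and the `eye_separation` parameter (a horizontal shift between the two views) are hypothetical and are not part of the specification:

```python
# Illustrative sketch: apply the video offset from the motion sensors to
# the buffered display image and emit left-eye and right-eye display
# views. All names, including eye_separation, are hypothetical.

def crop(image, x, y, w, h):
    """Crop a w-by-h region of a 2-D pixel array at offset (x, y)."""
    return [row[x:x + w] for row in image[y:y + h]]

def generate_eye_views(display_image, offset_x, offset_y,
                       view_w, view_h, eye_separation):
    """Return (left_view, right_view) shifted by the video offset."""
    left = crop(display_image, offset_x, offset_y, view_w, view_h)
    right = crop(display_image, offset_x + eye_separation, offset_y,
                 view_w, view_h)
    return left, right
```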
In the example in
In various embodiments, the motion sensors 304 send the movement data to the source device 300, which, in turn, determines whether such movement would cause the user's view to exceed the boundary 401a-401d of the previously transferred display image 401. In various embodiments, the movement data sent to the source device 300 may include, for example, the video offset data, the change in raw motion sensor data, or the absolute raw motion sensor data. The kind of data may depend on the agreement between the source device and the HMD device for the particular implementation. In various embodiments, if the movement would cause the view to exceed the previously transferred display image boundary 401a-401d, the source device 300 generates a new display image with new display image boundaries and transfers the new display image to the pixel data generator 301. Otherwise, if the movement of the user's head that the motion sensors 304 detect would result in the user's view remaining within the previously transferred display image boundary 401a-401d, the source device 300 does not need to generate a new display image based on the user's head movement, saving resources of the source device 300.
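The boundary determination described above may be sketched as a simple geometric test. This is an illustrative, non-limiting example in which the display image's origin is assumed to be at (0, 0); the function and parameter names are hypothetical:

```python
# Illustrative sketch: decide whether head movement would push the user's
# view outside the previously transferred display image boundary. The
# image origin is assumed to be (0, 0); all names are hypothetical.

def view_exceeds_boundary(view_x, view_y, view_w, view_h,
                          dx, dy, image_w, image_h):
    """Return True if shifting the view by (dx, dy) pushes any edge of
    the view outside the display image boundary."""
    new_x = view_x + dx
    new_y = view_y + dy
    return (new_x < 0 or new_y < 0 or
            new_x + view_w > image_w or
            new_y + view_h > image_h)
```

Only when this test returns True does the source device need to render and transfer a new display image; otherwise the pixel data generator can continue extracting views from the buffered image.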
In other embodiments, the motion sensors 304 (or alternatively the pixel data generator 301) may determine if such movement would cause the user's view to exceed the boundary 401a-401d of the previously transferred display image 401. Based on the determination that the movement would cause the view to exceed the previously transferred display image boundary 401a-401d, the motion sensors 304 (or alternatively the pixel display generator 301) sends a request to the source device 300 for a new display image. The request may include movement data, such as the video offset data, the change in raw motion sensor data, or the absolute raw motion sensor data. In response to the request, the source device 300 generates a new display image with new display image boundaries and transfers the new display image to the pixel data generator 301.
In certain embodiments, the source device 300 generates and transfers image offset data to the pixel data generator 301, the image offset data representing an offset between the previous and new display image. The pixel data generator 301 may then generate, based on the image offset data, a new display view which corresponds to the video offset data and the new display image. Alternatively, the source device 300 may generate the new display view based on the video offset data and new display image, and transfer the new display view to the pixel data generator 301.
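When the source device transfers image offset data as described above, the pixel data generator may locate the display view in the new display image's coordinates by combining the two offsets. The following non-limiting sketch uses hypothetical names and assumes both offsets are expressed relative to the previous display image's origin:

```python
# Illustrative sketch: combine the image offset sent by the source device
# (position of the new display image relative to the old one) with the
# video offset from the motion sensors (position of the desired view
# relative to the old image) to find the view origin in the new image.
# All names and coordinate conventions are hypothetical.

def view_origin_in_new_image(video_offset, image_offset):
    """Return the (x, y) origin of the display view in the new display
    image's coordinate frame."""
    vx, vy = video_offset
    ix, iy = image_offset
    return (vx - ix, vy - iy)
```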
In various embodiments, the resources saved (e.g., by having the pixel data generator 301 use video offset data generated by the motion sensors 304 in HMD 305 as in the examples in
The components shown in
Mass data storage 530, which can be implemented with a magnetic disk drive, solid state drive, or an optical disk drive, is a non-volatile storage device for storing data and instructions for use by processor unit(s) 510. Mass data storage 530 stores the system software for implementing embodiments of the present disclosure for purposes of loading that software into main memory 520.
Portable storage device 540 operates in conjunction with a portable non-volatile storage medium (such as a flash drive, compact disk, digital video disc, or USB storage device, to name a few) to input and output data/code to and from the computer system 500 of
User input devices 560 can provide a portion of a user interface. User input devices 560 may include one or more microphones; an alphanumeric keypad, such as a keyboard, for inputting alphanumeric and other information; or a pointing device, such as a mouse, a trackball, stylus, or cursor direction keys. User input devices 560 can also include a touchscreen. Additionally, the computer system 500 as shown in
Graphics display system 570 includes a liquid crystal display (LCD) or other suitable display device. Graphics display system 570 is configurable to receive textual and graphical information and process the information for output to the display device.
Peripheral devices 580 may include any type of computer support device to add additional functionality to the computer system.
The components provided in the computer system 500 of
The processing for various embodiments may be implemented in software that is cloud-based. In some embodiments, the computer system 500 is implemented as a cloud-based computing environment. In other embodiments, the computer system 500 may itself include a cloud-based computing environment. Thus, the computer system 500, when configured as a computing cloud, may include pluralities of computing devices in various forms, as will be described in greater detail below.
In general, a cloud-based computing environment is a resource that typically combines the computational power of a large grouping of processors (such as within web servers) and/or that combines the storage capacity of a large grouping of computer memories or storage devices.
The cloud may be formed, for example, by a network of web servers that comprise a plurality of computing devices, such as the computer system 500, with each server (or at least a plurality thereof) providing processor and/or storage resources. These servers may manage workloads provided by multiple users (e.g., cloud resource customers or other users).
While the present technology is susceptible of embodiment in many different forms, there is shown in the drawings and herein described in detail several specific embodiments with the understanding that the present disclosure is to be considered as an exemplification of the principles of the technology and is not intended to limit the technology to the embodiments illustrated.
The present application claims the benefit of U.S. Provisional Application No. 62/380,961, filed Aug. 29, 2016, which is incorporated herein by reference for all purposes.