An embodiment of the present invention relates to the art of head mounted displays used to present rendered visual images to users in augmented reality (AR) and virtual reality (VR) applications.
Currently, augmented reality (AR) and virtual reality (VR) devices are being manufactured and sold to customers for use in many fields such as business operations, industrial control, education, and consumer entertainment. These systems typically comprise a computer system having a main computer and a graphics engine. A mesh model of a scene or game is rendered into a 3D stereoscopic visual presentation and sent, frame by frame, to a head mounted display apparatus worn by a user, who views the rendered images.
Some limitations arise in these systems because the rendered images must, necessarily, lag behind the real time movements of the user as data from head or eye tracking means in the headsets are transmitted back to these processing systems. This delay is referred to as “latency” and must be kept as small as a few milliseconds in order to prevent disorientation and other unpleasant sensations, and to promote the illusion of real presence of the rendered objects. Further, added processing is often necessary to compensate for deficiencies in the display technology due to optical problems such as spherical and chromatic aberration in the lens systems.
Techniques have been developed in which the graphics portion of the computer system uses the time after a frame has been rendered, as a post render process, to “fix up” the frame's position and/or other properties based on a new head or eye position datum or a prediction thereof. This post render process is typically performed by the graphics processing unit (GPU) before the frame is shown to the user. Post render actions may also be used to correct for optical problems by pre-distorting images in position and/or pixel color, etc. so as to present resulting images to the eyes of the users that represent the desired rendering.
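As an illustrative sketch only (not part of the claimed subject matter), the post render “fix up” described above may be modeled as a simple horizontal reprojection of a rendered frame using the head yaw measured after rendering. The function name, the pure-yaw small-angle model, and the pixels-per-radian scale are assumptions for illustration:

```python
import numpy as np

def post_render_fixup(frame, yaw_at_render, yaw_at_display, pixels_per_radian):
    """Shift a rendered frame horizontally to compensate for head rotation
    that occurred after the frame was rendered.

    Hypothetical simplification: pure yaw, small angles, and a 2D pixel
    shift standing in for a full reprojection."""
    shift = int(round((yaw_at_display - yaw_at_render) * pixels_per_radian))
    # np.roll approximates the shift; a real system would re-sample and
    # fill the revealed edge rather than wrap pixels around.
    return np.roll(frame, -shift, axis=1)
```

A real implementation would operate on the GPU and handle rotation about all axes; this sketch only shows where the late head-pose sample enters the pipeline.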
There also may be input devices or other computer peripherals not shown, and the HMD may have either video or direct see-through capability, also not shown. In this figure the cable 103 also carries signals back to the computer from any head or eye position sensing that may be part of HMD 104.
In the prior art it is typical for the HMD 104 to display images that have been rendered by the GPU 102, the GPU rendering in such a way as to counteract optical aberrations and other problems inherent in the specific HMD design and state of manufacture.
A simplified process flow for the prior art system of
However, in many cases the post render fixup of the images in the prior art system of
The computer system 101 may, for example, be a general purpose user computer running background programs. That is, the capability of the computer system may be limited and the computer system may be multitasking. Additionally, in the case of an AR/VR program, the complexity of the program may vary from one portion to another. For example, AR/VR games may vary in complexity and have portions of game play in which more processing is required to generate rendered frames.
Additionally, in many consumer applications the user may use the HMD 104 with different computing systems. For example, while some gamers use high-end computers with fast GPUs, more generally there is also a market segment of consumers that utilize computers with slower GPUs, such as mobile devices or general purpose home computers. As a result, the frame update rates may be lower than desired and there may be uneven loading, which can create an unacceptable user experience.
An apparatus, system, and method directed to a head mounted display (HMD) includes a post rendering unit (PRU) to perform one or more post rendering processing options on rendered image data received from an external computer. The PRU may be disposed within, attached to, or otherwise coupled to the HMD.
In one embodiment, the PRU reduces a latency in augmented reality (AR) and virtual reality (VR) applications to provide actions such as correction for movement of the user's head or eyes. Additionally, in one embodiment the PRU may be used when the external computer is slow or overloaded. In selected embodiments, the PRU may also perform other adaptations and corrections.
The foregoing summary, as well as the following detailed description of illustrative implementations, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the implementations, there is shown in the drawings example constructions of the implementations; however, the implementations are not limited to the specific methods and instrumentalities disclosed. In the drawings:
The computer system 301 may be a laptop computer. However, more generally, the computer system may be implemented using a computer with different capabilities, such as a smartphone 340 with graphics capability, a slate computer 350, or a desktop computer 360.
One aspect is that offloading one or more post rendering functions reduces the post rendering workload on the GPU 302 and CPU 306. This allows the user to select from a range of computing devices for computer system 301, providing advantages in portability and/or cost. Offloading the post rendering to the HMD 304 also potentially reduces the engineering requirements for the high bandwidth cable 303 and reduces latency when the user changes their head or eye position.
The PRU 305 has access to local sensor data in the HMD, including motion tracking. The local sensor data may include other types of sensors, such as temperature, lighting conditions, machine vision, etc.
The PRU 305 may make its selection based on various inputs and sub-processes. The decisions may be triggered based on detecting minimum thresholds (e.g., a frame rate of received rendered frames from the GPU falling below a threshold frame rate; head or eye motion exceeding one or more thresholds for a current change in position or a predicted change in position). However, other rules besides a minimum threshold may be utilized. More generally, a rules engine or rules table may be used by the PRU 305 to select a post rendering processing operation based on a set of inputs.
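A rules table of the kind described above can be sketched as follows; the predicate names, threshold values, and operation names are hypothetical, chosen only to illustrate mapping a set of inputs to post rendering operations:

```python
# Hypothetical rules table: each entry pairs a predicate over the PRU's
# inputs with the name of a post rendering operation to enable.
RULES = [
    (lambda s: s["frame_rate"] < 60.0, "interpolate_frame"),
    (lambda s: abs(s["head_velocity"]) > 0.5, "fixup_reprojection"),
    (lambda s: s["computer_overloaded"], "local_frame_repeat"),
]

def select_operations(state):
    """Return every post rendering operation whose rule fires for the
    current input state."""
    return [op for rule, op in RULES if rule(state)]
```

A production rules engine would likely add priorities and mutual-exclusion groups; the sketch shows only the threshold-triggered selection the text describes.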
The post rendering operations may be implemented in different ways. However, it will be understood that individual aspects may be implemented by one or more processing engines implemented as hardware, firmware, or as software code residing on a non-transitory computer readable medium. In one embodiment the PRU 305 includes a predistortion engine 319 and a fixup engine 321 to adapt a frame based on changes in user eye or head position. A frame interpolation engine 323 generates interpolated frames based on one or more conditions, such as detecting head or eye motion or detecting a slowdown in the external computer system. A video sub-windows engine 325 supports adding one or more video sub-windows to the displayed image. An image shifting engine 327 supports shifting an image in time or in position. A Z buffer adjustment engine 329 supports performing adjustments of the image based on Z-buffer depth data provided by the external computer. A local sensor data adjustment engine 331 performs adjustment of the processing based on local sensor data, which may include environmental data and machine vision data. An engine 333 may be included to specifically monitor slow or overloaded conditions of the external computer. An eye and head tracking and prediction engine 335 monitors local motion sensor data, determines the current head and eye position of the user, and may also predict the user's head and eye position one or more frames ahead of its current position. A post rendering processing control 337 may be provided to receive inputs from other engines and modules and make decisions on which post rendering operations are to be performed at a given time.
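One way to picture the relationship between the individual engines and the post rendering processing control 337 is as a registry of stages the control unit can enable or disable per frame. This is an illustrative software model only; the class and method names are assumptions, and the text contemplates hardware and firmware implementations as well:

```python
# Illustrative control-unit model: engines register as callables and the
# control applies each enabled engine in order, chaining frame data
# through the stages. Implementations here are placeholder stubs.
class PostRenderingControl:
    def __init__(self):
        self.engines = {}   # name -> [enabled flag, function]

    def register(self, name, fn, enabled=True):
        self.engines[name] = [enabled, fn]

    def run(self, frame, inputs):
        # Apply each enabled engine in registration order, feeding the
        # output of one stage into the next.
        for enabled, fn in self.engines.values():
            if enabled:
                frame = fn(frame, inputs)
        return frame
```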
The PRU 305 calculates position changes 613 and generates fixed up rendered views 615 and performs predistortion processing 617 prior to sending the processed image data to the left and right eye displays 619, 621. The PRU 305 can perform the fixup and predistortion operations more quickly than the computer system 301 for a variety of reasons. One reason is that there is a potential time savings for the PRU 305 to perform these operations because the motion tracking sensors are local to the HMD, permitting the PRU to better manage the post render fixup and predistortion processing steps in the HMD. Additionally, there is a potential time savings for the PRU to perform these operations because of the latencies associated with the cable or wireless interface delays and any time required for the CPU and GPU to respond, which may depend on how busy the computer system 301 is. In one embodiment, the PRU need not wait for a complete image to be transferred before beginning its work; it may begin processing after just a few lines of video have been transferred and then rapidly proceed in a pipelined manner through the rest of the frame.
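The pipelined, line-at-a-time processing described above can be sketched with a generator that applies a fixup to each scanline as it arrives, rather than buffering the whole frame first. The function names are hypothetical; the generator merely models the incoming video link:

```python
def pipelined_process(scanlines, fixup):
    """Process scanlines as they arrive so display output can begin after
    only a few lines have been transferred (illustrative model of the
    PRU's pipelined operation; not an actual hardware interface)."""
    for line in scanlines:      # the iterator stands in for the video link
        yield fixup(line)       # each line is emitted as soon as it's fixed up
```

In hardware this would be a streaming datapath; the point of the sketch is only that per-line processing removes the full-frame buffering delay.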
As an additional example, the PRU 305 may predict the head or eye position of the user and shift an image presentation based on the predicted head or eye position of the user. As another example, the post rendering unit may place or adjust an image view in a wide angle projected image based on head or eye position of the user.
The PRU 305 may perform a variety of different post rendering processing operations, depending on implementation details. For example, in displays that utilize liquid crystal on silicon (LCoS) technology, a problem occurs when user head movement causes the sequential flashes of the different primary colors that make up the video pixels to be perceived as spatially separated, producing color smearing. In one embodiment, the PRU may counter this by shifting the images of the primary color planes in accord with the head motion such that each flash places the pixels in the same place on the user's retina, causing the colors to combine properly into the desired image.
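The per-color-plane shift can be illustrated as follows, assuming a field-sequential display where each primary flashes at a known offset in time; the linear-motion model and all parameter names are assumptions for illustration:

```python
import numpy as np

def shift_color_planes(frame_rgb, head_velocity_px_per_s, flash_times_s):
    """Shift each sequentially flashed primary color plane opposite to the
    head motion so all three flashes land on the same retinal position.

    Assumes constant head velocity over one frame; later flashes receive a
    proportionally larger compensating shift."""
    out = np.empty_like(frame_rgb)
    for c, t in enumerate(flash_times_s):
        shift = int(round(head_velocity_px_per_s * t))
        # np.roll models the shift; real hardware would clip, not wrap.
        out[..., c] = np.roll(frame_rgb[..., c], -shift, axis=1)
    return out
```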
In general, many AR/VR applications may be improved by a higher frame rate in the display. Often this is limited by the capability of the main computer and its GPU. Referring to
Referring to
Referring to
Referring to
Referring to
Referring to
Referring to
In particular, a projected AR headset may have moving mirrors or other optics that, synchronized with eye tracking, sweep the projection from side to side, or both side to side and up and down, keeping the projected image centered with respect to the user's eyes, thus providing a nearly foveal display. The result of this directed projection is to give the effect of substantially higher resolution when the light from the images is returned by a retroreflective screen. In such an embodiment the GPU would be rendering over a much wider view than the projectors can project, while the PRU selects a slice of that render that centers on the foveal retina. As the user's eyes move, the main GPU need not re-render the scene as long as the eyes are pointed within a bound that can be serviced by the PRU.
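The selection of a projectable slice from the over-rendered frame reduces to a windowing computation; the following sketch shows only the horizontal case, with illustrative names, and clamps the window so the gaze can wander to the render's edge without re-rendering:

```python
def select_foveal_slice(wide_frame_width, slice_width, gaze_x):
    """Pick the horizontal window of the over-rendered frame centered on
    the tracked gaze point, clamped to the render bounds (illustrative
    geometry; a full version would window vertically as well)."""
    left = int(gaze_x - slice_width / 2)
    left = max(0, min(left, wide_frame_width - slice_width))
    return left, left + slice_width
```

When the clamp engages, the gaze is approaching the bound the text mentions, signaling that the GPU should re-render a freshly centered wide view.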
In the case of using projection onto the retroreflective surface, the PRU may receive information regarding the surface optical characteristics encoded with the tracking information such that the PRU can correct for different color reflectance per surface and per angle of projection.
One of the benefits of moving the predistortion (dewarp) operations into the PRU is that the interface to the HMD can be standardized as in the cases of current video projector interfaces. The interface for a commercial 3D video projector or monitor could be an example. By following such a standard, the operation of the main computer and its GPU is simplified, as if it were rendering for such a display (useful to have image stabilization done by the PRU), and the HMD may then have the capability to be plug-compatible with these displays so that it can show either 2D or 3D video productions. The PRU equipped HMD may also use its tracking ability to show expanded views of otherwise prerecorded videos, or view live teleconferences by line of sight. Similar applications may present themselves in the area of viewing remote scenes such as in security operations.
In one embodiment, the PRU 305 receives compressed video rendered by the CPU and GPU. As illustrated in
While various individual features and functions of the PRU 305 have been described, it will be understood that combinations and subcombinations of the features are contemplated. For example, one or more rules may be provided to enable or disable individual functions or their combinations and subcombinations.
It will be understood that criteria for determining when specific modes of operation of the PRU 305 are triggered may be determined in view of various considerations. One consideration in adapting to changes in user head or eye position and/or a computer slowdown or overload condition is latency. Latency in an AR/VR system should generally be kept as small as a few milliseconds in order to prevent disorientation and other unpleasant sensations, and to promote the illusion of real presence. Thus, criteria to determine when specific functions of the PRU 305 are activated may be determined for a particular implementation based on an empirical determination of keeping latency low enough to achieve a pleasant user experience for AR/VR games of interest with external computer systems having a given range of capabilities. For example, the PRU may be designed to improve the operation of AR/VR systems based on augmenting the types of external computing platforms many consumers already have.
In an AR system with projection optics the angle of incidence with respect to the reflective or retroreflective surface is an important consideration. In particular, the image quality and brightness are a function of the angle of incidence with respect to the reflective or retroreflective surface. In one embodiment, the HMD projects images to be returned via a reflective or retroreflective surface and the PRU 305 determines an adjustment based on the angle of incidence of the projection with respect to the surface. In one embodiment the PRU adjusts projection options of the HMD based on the angle of incidence.
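One simple way to model the angle-of-incidence adjustment described above is a compensating brightness gain; the cosine-power falloff model, the exponent, and the clamp below are assumptions for illustration, not a characterization of any actual retroreflective material:

```python
import math

def gain_for_incidence(angle_deg, falloff_exponent=1.0):
    """Compensating brightness gain for a surface whose retroreflective
    return falls off with the angle of incidence.

    The cos^n falloff model is an illustrative assumption; a real PRU
    would use measured surface characteristics, per the text."""
    response = math.cos(math.radians(angle_deg)) ** falloff_exponent
    return 1.0 / max(response, 1e-3)   # clamp to avoid runaway gain at grazing angles
```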
Additional background information on HMDs, tracking, motion sensors, and projection based systems utilizing retroreflective surfaces and related technology is described in several patent applications of the Assignee of the present application. The following US patent applications of the Assignee are hereby incorporated by reference in their entirety: U.S. application Ser. No. 14/733,708 “System and Method For Multiple Sensor Fiducial Tracking,” Ser. No. 14/788,483 “System and Method for Synchronizing Fiducial Markers,” Ser. No. 14/267,325 “System and Method For Reconfigurable Projected Augmented Virtual Reality Appliance,” Ser. No. 14/267,195 “System and Method to Identify and Track Objects On A Surface,” and Ser. No. 14/272,054 “Two Section Head Mounted Display.”
While the invention has been described in conjunction with specific embodiments, it will be understood that it is not intended to limit the invention to the described embodiments. On the contrary, it is intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims. The present invention may be practiced without some or all of these specific details. In addition, well known features may not have been described in detail to avoid unnecessarily obscuring the invention. In accordance with the present invention, the components, process steps, and/or data structures may be implemented using various types of operating systems, programming languages, computing platforms, computer programs, and/or computing devices. In addition, those of ordinary skill in the art will recognize that devices such as hardwired devices, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), optical MEMS, or the like, may also be used without departing from the scope and spirit of the inventive concepts disclosed herein. The present invention may also be tangibly embodied as a set of computer instructions stored on a computer readable medium, such as a memory device.
The present application claims the benefit of and priority to provisional application 62/115,874, the contents of which are hereby incorporated by reference. The present application also claims the benefit of and priority to the following provisional applications: No. 62/135,905 “Retroreflective Light Field Display”; No. 62/164,898 “Method of Co-Located Software Object Protocol”; No. 62/165,089 “Retroreflective Fiducial Surface”; and No. 62/190,207 “HMPD with Near Eye Projection,” each of which is also hereby incorporated by reference.
Number | Date | Country
---|---|---
62115874 | Feb 2015 | US
62135905 | Mar 2015 | US
62164898 | May 2015 | US
62165089 | May 2015 | US
62190207 | Jul 2015 | US