The following relates generally to systems and methods for augmented and virtual reality environments, and more specifically to systems and methods for displaying physical environment in the display of a head mounted device.
The range of applications for augmented reality (AR) and virtual reality (VR) visualization has increased with the advent of wearable technologies and 3-dimensional (3D) rendering techniques. AR and VR exist on a continuum of mixed reality visualization.
In embodiments, a method is described for simultaneously displaying, in a head mounted display disposed upon a user in a physical environment, a physical image stream of a captured region of the physical environment captured within the field of view of an imaging system of the head mounted display, and an augmented reality rendered image stream generated by a processor for the physical environment. The method comprises: determining the captured region; generating a rendered image stream for a region of a map of the physical environment at least partially corresponding to the captured region; and simultaneously receiving and displaying the physical image stream and the rendered image stream on a display system of the head mounted display.
In embodiments, a system is described for matching an augmented reality rendered image stream to a physical image stream of a region of a physical environment captured in the field of view of an imaging system of an HMD. The system comprises a processor configured to: obtain a map of the physical environment; determine the captured region; and generate a rendered image stream for a region of the map at least partially corresponding to the captured region.
These and other embodiments are described herein.
A greater understanding of the embodiments will be had with reference to the Figures, in which:
It will be appreciated that for simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein may be practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the embodiments described herein. Also, the description is not to be considered as limiting the scope of the embodiments described herein.
It will also be appreciated that any module, unit, component, server, computer, terminal or device exemplified herein that executes instructions may include or otherwise have access to computer readable media such as storage media, computer storage media, or data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Computer storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by an application, module, or both. Any such computer storage media may be part of the device or accessible or connectable thereto. Any application or module herein described may be implemented using computer readable/executable instructions that may be stored or otherwise held by such computer readable media and executed by the one or more processors.
The present disclosure is directed to systems and methods for augmented reality (AR). However, a skilled reader will appreciate that the term “AR” may encompass several meanings. In the present disclosure, AR includes: the interaction by a user with real physical objects and structures along with virtual objects and structures overlaid thereon; and the interaction by a user with a fully virtual set of objects and structures that are generated to include renderings of physical objects and structures and that may comply with scaled versions of physical environments to which virtual objects and structures are applied, which may alternatively be referred to as an “enhanced virtual reality”. Further, the virtual objects and structures could be dispensed with altogether, and the AR system may display to the user a version of the physical environment which solely comprises an image stream of the physical environment. Finally, a skilled reader will also appreciate that by discarding aspects of the physical environment, the systems and methods presented herein are also applicable to virtual reality (VR) applications, which may be understood as “pure” VR. For the reader's convenience, the following refers to “AR” but is understood to include all of the foregoing and other variations recognized by the skilled reader.
User engagement in an AR system may be enhanced by allowing a user to move throughout a physical space in an unconstrained manner. It will be appreciated, however, that a user equipped with a head mounted display (HMD) is preferably aware of obstacles within the physical environment in order to move about it and not accidentally make contact with the obstacles.
Thus, user engagement and user safety for an AR system may be enhanced by displaying substantially real-time images of at least part of the physical environment to the user on an HMD.
Systems and methods are described herein for displaying parts of a physical environment on the HMD of a user occupying the physical environment.
Referring now to
A user equipped with the HMD 12 and situated in a physical environment may move about, and interact with, the physical environment while viewing a VR on the display system 121 of the user's HMD 12.
In certain AR applications, the user simply views an entirely rendered environment bearing no relation to the physical environment (i.e., a “pure” VR application). In such applications, then, the user's engagement with the physical environment may be purely as a space within which to move. For example, in an application in which the user exercises, the physical environment may serve as a platform within which the user may perform calisthenics, aerobics, resistance training or other suitable exercises. The AR displayed to the user, then, may not account for obstacles within the physical space. However, the user may wish or need to view the physical space in substantially real-time to ensure that she does not encounter obstacles or boundaries within the physical environment and thereby sustain an injury. The present system and method enables the user to view the physical environment while interacting with the rendered VR environment, notwithstanding that the VR environment itself need not account for the physical environment.
In other applications, however, the user views an AR comprising a completely rendered version of the physical environment (i.e., “enhanced VR”). In such applications, the user may determine the locations for obstacles or boundaries in the physical environment based solely on the rendering displayed to her in the display system 121 of her HMD 12. However, the user may still need or prefer to view a substantially real-time image stream of the physical environment.
In still other applications, the user views an AR comprising computer-generated renderings (a “rendered image stream”) in conjunction with an image stream of the physical environment (a “physical image stream”). In such applications, the user may still need or wish to view a substantially real-time physical image stream free of rendered effects.
The HMD 12 may therefore implement one or more techniques to display a substantially real-time image stream of the physical environment (a “physical image stream”) to the user. In aspects, the HMD 12 may invoke picture-in-picture (PIP), picture-outside-picture, picture-and-picture, or multi-display techniques to display the physical image stream to the user, as described herein in greater detail. In further aspects, the HMD 12 may implement watermark overlapping to display the physical image stream to the user, as described herein in greater detail. It will be appreciated that in pure VR applications, the physical image stream and the rendered image stream are entirely unrelated. However, in AR and enhanced VR applications, the rendered image stream is based on the physical environment. In such applications, the processor preferably matches the rendered image stream to the physical image stream, as described herein.
In AR and enhanced VR applications, the HMD may display both the rendered and physical image streams to the user, either using PIP and related techniques, or by combining the rendered and physical image streams for display.
The HMD 12 invokes the systems in the scanning, positioning and orientation determining module 141 in conjunction with the processor 130 to, respectively, scan and map the physical environment, obtain real time positioning for the HMD 12 within the physical environment, and determine the orientation of the HMD 12. The systems in the scanning, positioning and orientation determining module 141 may comprise one or more of: a scanning laser range finder (SLRF), which may constitute the scanning and local positioning systems; a laser positioning system having a laser emitter and/or receiver configured to determine the position of the HMD 12 with respect to correspondingly opposite laser receivers or emitters located at known locations throughout the physical environment; and a 3-axis magnetic orientation system having a 3-axis magnetic source or sensor configured to determine the location and/or orientation of the HMD 12 with respect to a correspondingly opposite 3-axis magnetic sensor or source having a known orientation and location within the physical environment.
The processor 130 may map the physical environment by generating a virtual map, such as, for example, a point cloud, of the physical environment using measurements of the physical environment provided by the scanning system. The processor 130 may assign a coordinate system based on world coordinates to the map of the physical environment. The processor 130 further generates AR renderings for the map, such as, for example, virtual objects, effects, characters and/or other suitable CGI. The processor associates all points in the AR rendered image stream with the physical environment map, such that it is operable to associate AR rendered images with particular regions of the physical environment.
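By way of a non-limiting illustration, the following sketch shows one way such an association might be held in software: a point-cloud map in world coordinates, with rendered elements anchored to map positions so that they can be recovered by region. The class and method names, and the use of Python with NumPy, are assumptions made for illustration only and do not reflect any particular implementation.

```python
import numpy as np

class EnvironmentMap:
    def __init__(self, points):
        # points: (N, 3) array of scanned world coordinates (e.g., from an SLRF)
        self.points = np.asarray(points, dtype=float)
        self.rendered_elements = []  # (world_position, element) pairs

    def add_rendered_element(self, world_position, element):
        # Anchor a virtual object (CGI) to a world coordinate on the map.
        self.rendered_elements.append((np.asarray(world_position, dtype=float), element))

    def elements_near(self, world_position, radius):
        # Recover the rendered elements associated with a region of the map.
        p = np.asarray(world_position, dtype=float)
        return [e for pos, e in self.rendered_elements
                if np.linalg.norm(pos - p) <= radius]

# Usage: map a scanned room and pin a virtual object to a world location.
room = EnvironmentMap(np.random.rand(1000, 3) * 5.0)  # stand-in scan data
room.add_rendered_element([1.0, 0.5, 2.0], "virtual_character")
print(room.elements_near([1.1, 0.5, 2.0], radius=0.5))
```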
As previously described, the HMD may further comprise an imaging system configured to generate an image stream of the physical environment. The imaging system may comprise at least one camera 123 which provides the image stream of the physical environment to the processor 130 or directly to the display system 121. The processor 130 is configured to determine, for a given point in time, the field of view for the at least one camera 123 of the imaging system. As shown in
When the camera captures an image stream of the physical environment, the captured physical image stream at any given moment will comprise elements of the physical environment lying within the field of view of the camera at that time.
The physical image stream obtained by the camera is either transmitted to the processor for processing and/or transmission to the display system, or directly to the display system for display to the user.
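By way of illustration, a minimal sketch of a field-of-view test follows: given the camera's determined position and orientation, it checks whether a world point falls within a symmetric conical field of view about the camera's forward axis. The conical model and all names are simplifying assumptions; a practical imaging system would use a full rectangular frustum.

```python
import numpy as np

def in_field_of_view(point, cam_pos, cam_forward, fov_deg):
    """Return True if a world point falls within a symmetric conical
    field of view about the camera's forward axis."""
    v = np.asarray(point, dtype=float) - np.asarray(cam_pos, dtype=float)
    dist = np.linalg.norm(v)
    if dist == 0:
        return True  # point coincides with the camera
    f = np.asarray(cam_forward, dtype=float)
    f = f / np.linalg.norm(f)
    cos_angle = np.dot(v / dist, f)
    return cos_angle >= np.cos(np.radians(fov_deg / 2.0))

# Usage: camera at origin looking down +z with a 90-degree field of view.
print(in_field_of_view([0.2, 0.0, 1.0], [0, 0, 0], [0, 0, 1], 90))  # True
print(in_field_of_view([5.0, 0.0, 0.1], [0, 0, 0], [0, 0, 1], 90))  # False
```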
Referring now to
Referring again to
Alternatively, the processor may generate a rendered image stream that is offset, enlarged, reduced or which has a different aspect ratio by rendering elements in a region of the map that, respectively, is offset, enlarged or reduced, or that falls within a wider or narrower field of view than the corresponding region captured in the physical image stream.
At block 305, the processor transmits the rendered image stream to the display unit for display. In order to accurately match the rendered and physical image streams while the user is moving throughout the physical environment, the display system preferably receives a rendered image stream generated based substantially on the coordinates, orientation and field of view of the physical camera at the time of capture of the portion of the physical image stream being displayed. If the fields of view of the virtual and physical cameras are substantially aligned and identical, simultaneous and combined display of both image streams provides a combined stream that is substantially matched. Alternatively, if the fields of view of the physical and virtual cameras are offset from each other, at block 307 the processor adjusts the screen coordinates (i.e., the coordinates on the display screen of the HMD) of the elements in the rendered image stream to align with the screen coordinates of the corresponding physical elements in the physical image stream. The processor determines screen coordinates for the physical image stream and the rendered image stream by invoking suitable view transformation techniques based on known parameters for the display system, and the determined orientation, location and field of view for each of the physical and virtual cameras. The processor transmits the adjusted rendered image stream to the display system substantially simultaneously, as described above with respect to block 305. Although the coordinates of the elements in each of the physical and virtual image streams will be substantially matched, only a partial overlay will be displayed. For example, if the virtual camera had a smaller field of view at block 303 than the physical camera, the overlaid image displayed on the display system will show a partial overlay in which the rendered image stream is only overlaid over a correspondingly smaller region of the physical image stream as displayed.
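The following sketch illustrates the kind of view transformation alluded to above: a world point is projected to screen coordinates through a simple pinhole model for each of the physical and virtual cameras, and the difference gives the per-element adjustment to be applied to the rendered stream. The fixed +z viewing axis (rotation omitted) and all parameter names are assumptions made to keep the example short.

```python
import numpy as np

def project_to_screen(world_point, cam_pos, fov_deg, width, height):
    """Pinhole projection of a world point into pixel coordinates for a
    camera at cam_pos looking down +z (rotation omitted for brevity)."""
    p = np.asarray(world_point, dtype=float) - np.asarray(cam_pos, dtype=float)
    if p[2] <= 0:
        return None  # behind the camera
    focal = (width / 2.0) / np.tan(np.radians(fov_deg / 2.0))
    x = focal * p[0] / p[2] + width / 2.0
    y = focal * p[1] / p[2] + height / 2.0
    return np.array([x, y])

# The same physical element projected through each camera; the difference is
# the per-element screen-coordinate correction applied to the rendered stream.
phys = project_to_screen([0.5, 0.2, 3.0], [0.0, 0, 0],  fov_deg=90, width=1920, height=1080)
virt = project_to_screen([0.5, 0.2, 3.0], [0.05, 0, 0], fov_deg=80, width=1920, height=1080)
print(phys, virt, phys - virt)
```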
In embodiments, the processor may increase or decrease the signal strength of one or the other of the physical and rendered image streams to vary the effective transparency. Referring to
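A minimal sketch of such a transparency adjustment follows, modelled here as per-pixel alpha blending of one frame from each stream; the blending model and all names are assumptions standing in for the signal-strength mechanism described above.

```python
import numpy as np

def blend_streams(physical_frame, rendered_frame, alpha):
    """Composite one frame of each stream; alpha near 1.0 emphasizes the
    rendered stream, alpha near 0.0 makes it effectively transparent."""
    return (alpha * rendered_frame + (1.0 - alpha) * physical_frame).astype(np.uint8)

# Usage with stand-in 720p RGB frames.
physical = np.full((720, 1280, 3), 100, dtype=np.uint8)
rendered = np.full((720, 1280, 3), 200, dtype=np.uint8)
combined = blend_streams(physical, rendered, alpha=0.7)
print(combined[0, 0])  # [170 170 170]
```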
Referring now to
At block 603, the processor generates a rendered image stream captured by a virtual camera directed at the map of the physical environment. The processor includes in the rendered image stream all rendered elements that would be visible within the field of view of the virtual camera. The processor may generate a rendered image stream which directly matches the physical image stream by displaying the rendered elements which would be visible within the field of view of a virtual camera having coordinates, orientation and field of view corresponding to the coordinates, orientation and field of view of the camera of the imaging system (recall that the processor associates the coordinates of the map to world coordinates when generating the map).
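As an illustration of configuring the virtual camera, the sketch below copies the physical camera's tracked coordinates, orientation and field of view into a virtual-camera pose (both cameras share world coordinates, so no conversion is needed), along with an offset or scaled variant as discussed below. The dataclass layout and names are assumptions.

```python
from dataclasses import dataclass, replace

@dataclass
class CameraPose:
    position: tuple      # world coordinates of the camera
    orientation: tuple   # e.g., yaw/pitch/roll from the orientation system
    fov_deg: float       # horizontal field of view

def match_virtual_camera(physical: CameraPose) -> CameraPose:
    # A directly matching rendered stream uses the physical camera's pose verbatim.
    return replace(physical)

def offset_virtual_camera(physical: CameraPose, fov_scale=1.0, dx=0.0) -> CameraPose:
    # An offset / enlarged / reduced rendered stream widens or narrows the field
    # of view and/or shifts the virtual camera relative to the physical one.
    x, y, z = physical.position
    return replace(physical, position=(x + dx, y, z), fov_deg=physical.fov_deg * fov_scale)

tracked = CameraPose(position=(1.0, 1.6, 2.0), orientation=(0.0, -5.0, 0.0), fov_deg=90.0)
print(match_virtual_camera(tracked))
print(offset_virtual_camera(tracked, fov_scale=0.8))
```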
Alternatively, the processor may generate a rendered image stream that is offset, enlarged, reduced or which has a different aspect ratio by rendering elements in a region of the map that, respectively, is offset, enlarged or reduced, or that falls within a wider or narrower field of view than the corresponding region captured in the physical image stream.
At block 605, the processor transmits the rendered image stream to the display unit for display. The processor and the display screen are configured to display, preferably selectively, the physical and rendered image streams substantially simultaneously, as previously described. However, in the present method, the two image streams are not overlaid; rather, each stream is simultaneously displayed in a discrete region of the display system, for example, in PIP format, as shown in
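A sketch of such display in discrete regions follows: a simple picture-in-picture composite that places a subsampled copy of one stream in a corner of the other. The integer-stride downscale and all names are illustrative assumptions.

```python
import numpy as np

def composite_pip(main_frame, inset_frame, scale=4, margin=16):
    """Place a subsampled copy of inset_frame in the lower-right corner of
    main_frame (nearest-neighbour downscale by integer stride)."""
    out = main_frame.copy()
    small = inset_frame[::scale, ::scale]
    h, w = small.shape[:2]
    H, W = out.shape[:2]
    out[H - h - margin:H - margin, W - w - margin:W - margin] = small
    return out

rendered = np.zeros((720, 1280, 3), dtype=np.uint8)      # main: rendered stream
physical = np.full((720, 1280, 3), 255, dtype=np.uint8)  # inset: physical stream
frame = composite_pip(rendered, physical)
print(frame.shape, frame[700, 1000])  # pixel inside the inset is white
```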
Referring now to
In embodiments, the processor only causes the display system to display the physical image stream upon the user selecting display of the physical image stream. In further embodiments, the processor causes the display system to display the physical image stream in response to detecting proximity to an obstacle in the physical environment. In still further embodiments, the processor increases the transparency of the rendered image stream in response to detecting proximity to an obstacle in the physical environment. Conversely, the processor may reduce the transparency of the rendered image stream as the HMD moves away from obstacles in the physical environment.
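The proximity-responsive behaviour described above might, purely as an illustrative sketch, be reduced to a mapping from the HMD's distance to the nearest mapped obstacle to the opacity of the rendered stream. The linear ramp, thresholds, and names are assumptions.

```python
import numpy as np

def rendered_alpha(distance_to_obstacle, near=0.5, far=2.0):
    """Map HMD-to-obstacle distance to rendered-stream opacity: fully
    transparent at or inside `near`, fully opaque at or beyond `far`."""
    t = (distance_to_obstacle - near) / (far - near)
    return float(np.clip(t, 0.0, 1.0))

def nearest_obstacle_distance(hmd_pos, obstacle_points):
    # Distance from the tracked HMD position to the closest mapped point.
    d = np.linalg.norm(np.asarray(obstacle_points, dtype=float)
                       - np.asarray(hmd_pos, dtype=float), axis=1)
    return float(d.min())

obstacles = [[2.0, 1.0, 0.0], [0.0, 1.0, 3.0]]
print(rendered_alpha(nearest_obstacle_distance([0.0, 1.0, 2.6], obstacles)))  # close: 0.0
print(rendered_alpha(nearest_obstacle_distance([5.0, 1.0, 5.0], obstacles)))  # far: 1.0
```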
In still further embodiments, the display system displays the physical and rendered image streams according to at least two of the techniques described herein.
Although the foregoing has been described with reference to certain specific embodiments, various modifications thereto will be apparent to those skilled in the art without departing from the spirit and scope of the invention as outlined in the appended claims. The entire disclosures of all references recited above are incorporated herein by reference.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/CA2014/050958 | 10/3/2014 | WO | 00

Number | Date | Country
---|---|---
61886443 | Oct 2013 | US