The following relates generally to systems and methods for augmented and virtual reality environments, and more specifically to systems and methods for location tracking in dynamic augmented and virtual reality environments.
The range of applications for augmented reality (AR) and virtual reality (VR) visualization has increased with the advent of wearable technologies and 3-dimensional (3D) rendering techniques. AR and VR exist on a continuum of mixed reality visualization.
In embodiments, a local positioning system is provided for determining a position of a user interacting with an augmented reality of a physical environment on a wearable display. The system comprises: at least one emitter, located at a known location in the physical environment, to emit a signal; a receiver disposed upon the user to detect each signal; and a processor to: (i) determine, from the at least one signal, the displacement of the receiver relative to the at least one emitter; and (ii) combine the displacement with the known location.
In further embodiments, a method is described for determining a position of a user interacting with an augmented reality of a physical environment on a wearable display, the method comprising: by a receiver disposed upon the user, detecting each signal from each of at least one emitter with a corresponding known location within the physical environment; and, in a processor, determining, from the at least one signal, the displacement of the receiver relative to the at least one emitter, and combining the displacement with the known location of the at least one emitter.
These and other embodiments are contemplated and described herein in greater detail.
A greater understanding of the embodiments will be had with reference to the Figures, in which:
It will be appreciated that for simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein may be practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the embodiments described herein. Also, the description is not to be considered as limiting the scope of the embodiments described herein.
It will also be appreciated that any module, unit, component, server, computer, terminal or device exemplified herein that executes instructions may include or otherwise have access to computer readable media such as storage media, computer storage media, or data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Computer storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by an application, module, or both. Any such computer storage media may be part of the device or accessible or connectable thereto. Any application or module herein described may be implemented using computer readable/executable instructions that may be stored or otherwise held by such computer readable media and executed by the one or more processors.
The present disclosure is directed to systems and methods for augmented reality (AR). However, the term “AR” as used herein may encompass several meanings. In the present disclosure, AR includes: the interaction by a user with real physical objects and structures along with virtual objects and structures overlaid thereon; and the interaction by a user with a fully virtual set of objects and structures that are generated to include renderings of physical objects and structures and that may comply with scaled versions of physical environments to which virtual objects and structures are applied, which may alternatively be referred to as an “enhanced virtual reality”. Further, the virtual objects and structures could be dispensed with altogether, and the AR system may display to the user a version of the physical environment which solely comprises an image stream of the physical environment. Finally, a skilled reader will also appreciate that by discarding aspects of the physical environment, the systems and methods presented herein are also applicable to virtual reality (VR) applications, which may be understood as “pure” VR. For the reader's convenience, the following refers to “AR” but is understood to include all of the foregoing and other variations recognized by the skilled reader. Systems and methods are provided herein for generating and displaying AR representations of a physical environment occupied by a user.
In embodiments, a system is configured to survey and model in 2- and/or 3-dimensions a physical environment. The system is further configured to generate AR layers to augment the model of the physical environments. These layers may be dynamic, i.e., they may vary from one instance to the next. The layers may comprise characters, obstacles and other graphics suitable for, for example, “gamifying” the physical environment by overlaying the graphics layers onto the model of the physical environment.
The following is further directed to a design and system layout for a dynamic environment and location in which an augmented reality system allows users to experience an actively simulated or non-simulated indoor or outdoor augmented virtual environment based on the system adaptively and dynamically learning its surrounding physical environment and locations.
In still further aspects, the following provides dynamic mapping and AR rendering of a physical environment in which a user equipped with a head mounted display (HMD) is situated, permitting the user to interact with the AR rendered physical environment and, optionally, other users equipped with further HMDs.
In yet further aspects, the following provides an HMD for displaying AR rendered image streams of a physical environment to a user equipped with an HMD and, optionally, to other users equipped with further HMDs or other types of displays.
Referring now to
As shown in schematic form in
Referring now to
Communication 20 between the various components of the system is effected through one or more wired or wireless connections, such as for example, Wi-Fi, 3G, LTE, cellular or other suitable connection.
As previously described, each HMD 12 generates signals corresponding to sensory measurements of the physical environment and the processor receives the signals and executes instructions relating to imaging, mapping, positioning, rendering and display. While each HMD 12 may comprise at least one embedded processor to carry out some or all processing tasks, the HMD 12 may alternatively or further delegate some or all processing tasks to the server 300 and/or the console 11. The server 300 may act as a master device to the remaining devices in the system. In embodiments, the system 10 is configured for game play, in which case the server 300 may manage various game play parameters, such as, for example, global positions and statistics of various players, i.e., users, in a game. It will be appreciated that the term “player” as used herein, is illustrative of a type of “user”.
Each HMD 12 may not need to delegate any processing tasks to the server 300 if the console 11 or the processor embedded on each HMD is, or both the console and the processor embedded on each HMD together are, capable of performing the processing required for a given application. In embodiments, at least one HMD 12 may serve as a master device to the remaining devices in the system.
The console 11 is configured to communicate data to and from the server 300, as well as at least one HMD 12. The console 11 may reduce computational burdens on the server 300 or the processor embedded on the HMD 12 by locally performing computationally intensive tasks, such as, for example, processing of high level graphics and complex calculations. In particularly computationally demanding applications, for example, the network 17 connection to the server 300 may be inadequate to permit some types of remote processing.
Each HMD 12 may be understood as a subsystem to the system 10 in which each HMD 12 acts as a master to its peripherals, which are slaves. The peripherals are configured to communicate with the HMD 12 via suitable wired or wireless connections, and may comprise, for example, an emitter 13 and a receiver 14.
The peripherals may enhance user interaction with the physical and rendered environments and with other users. For example, the emitter 13 of a first user may emit a signal (shown in
The console 11 may collect any type of data common to all HMDs in the field. For example, in a game of laser tag, the console 11 may collect and process individual and team scores. The console 11 may further resolve conflicts arising between HMDs in the field, especially conflicts involving time. For example, during a laser tag game, two players may “tag” or “hit” each other at approximately the same time. The console 11 may exhibit sufficient timing accuracy to determine which player's hit preceded the other's by, for example, assigning a timestamp to each of the reported tags and determining which timestamp is earlier.
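By way of illustration only, the timestamp comparison may be sketched as follows; the event record and its fields are assumptions introduced for the example rather than elements of the described system.

```python
# Hypothetical sketch: resolving near-simultaneous "hits" by timestamp at the console.
from dataclasses import dataclass

@dataclass
class HitEvent:
    shooter_id: str   # player whose emitter fired the beam
    target_id: str    # player whose receiver registered the hit
    timestamp: float  # console-assigned time of the reported tag, in seconds

def resolve_conflict(event_a: HitEvent, event_b: HitEvent) -> HitEvent:
    """Return the hit that occurred first; the earlier timestamp wins."""
    return event_a if event_a.timestamp <= event_b.timestamp else event_b

# Example: two players tag each other almost simultaneously.
first = resolve_conflict(HitEvent("p1", "p2", 12.0031), HitEvent("p2", "p1", 12.0045))
print(first.shooter_id)  # "p1" scored the earlier hit
```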
The console may further resolve positioning and mapping conflicts. For example, when two players occupy the same physical environment, both players share the same map of the physical environment. Mapping is described herein in greater detail. The console 11 therefore tracks the position of each player on the map so that any AR rendering displayed to each player on her respective HMD 12 reflects each player's respective position. When multiple users equipped with HMDs 12 are situated in the same physical environment, their respective HMDs may display analogous renderings adjusted for their respective positions and orientations within the physical environment. For example, in a game of augmented reality laser tag, if a rear player located behind a front player fires a beam past the front player, the front player sees a laser beam fired past him by the rear player, without seeing the rear player's gun.
By displaying AR renderings of the physical environment to each user, it will be appreciated that each user may experience the physical environment as a series of different augmented environments. In one exemplary scenario, by varying the display to the user on his HMD 12 with appropriate AR details, a user situated in a physical room of a building may experience the physical room first as a room in a castle and then second as an area of a forest.
As shown in
It will be appreciated that the present systems and methods, then, enable interaction with a physical environment as an AR scene of that environment. The HMD may be central to each user's experience of the physical environment as an AR environment in which the user may experience, for example, game play or training. As shown in
As previously described with respect to
With reference to
Further, since the scanning system is mounted to a user, rather than to a fixed location within the physical environment, scanning and mapping are inside-out (i.e., scanning occurs from the perspective of the user outwards toward the physical environment, rather than from the perspective of a fixed location in the physical environment and scanning the user) enabling dynamic scanning and mapping. As a user traverses and explores a physical environment, the scanning system and the processor cooperate to learn and render an AR scene comprising the physical environment based at least on the dynamic scanning and mapping.
The HMD may scan and map regions of the physical environment even before displaying AR for those regions to the user. The scanning system may “see” into corridors, doors, rooms, and even floors. Preferably, the scanning system scans the physical environment ahead of the user so that AR renderings for that portion of the physical environment may be generated in advance of the user's arrival there, thereby mitigating any lag due to processing time. The HMD may further create a “fog of war” by limiting the user's view of the rendered physical environment to a certain distance (radius), while rendering the AR of the physical environment beyond that distance.
The scanning system may comprise a scanning laser range finder (SLRF) or an ultrasonic rangefinder (USRF), each of which scans the physical environment by emitting a signal, whether a laser beam or an ultrasonic signal, as the case may be, towards the physical environment. When the signal encounters an obstacle in the physical environment, the signal is reflected from the obstacle toward the scanning system. The scanning system either calculates the amount of time between emission and receipt of the signal, or the angle at which the signal returns to the scanner/range finder to determine the location of the obstacle relative to the scanning system. The scanning system may surround the HMD 12, as shown in
As described with reference to
When the laser beam 731 is emitted, the time-of-flight IC records the departure angle and time; upon bouncing off an obstacle in the physical environment, the laser beam 731 is reflected back toward the SLRF 700 where it is detected by at least one photodiode 703. The return time and angle are recorded, and the distance travelled is calculated by the MCU in conjunction with the time-of-flight IC. Alternatively, the laser beam 731, after being emitted, may encounter a receiver in the physical environment. The receiver signals receipt of the beam to the console, server, or processor in the HMD and the time of receipt is used to calculate the distance between the SLRF and the receiver in the environment, as hereinafter described. It will be appreciated that a USRF might operate in like manner with ultrasonic emission.
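The range calculation may be sketched, for illustration, as follows; the function names are assumptions, and the propagation constant corresponds to the speed of light for the SLRF case or, approximately, the speed of sound for the USRF case.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0   # m/s, for a laser beam
SPEED_OF_SOUND = 343.0           # m/s, approximate, for an ultrasonic signal

def obstacle_position(departure_angle_deg: float,
                      round_trip_time_s: float,
                      propagation_speed: float = SPEED_OF_LIGHT) -> tuple:
    """Convert a recorded emission angle and round-trip time into the
    obstacle's (x, y) offset relative to the scanning system."""
    # One-way distance: the signal travels to the obstacle and back.
    distance = propagation_speed * round_trip_time_s / 2.0
    angle = math.radians(departure_angle_deg)
    return distance * math.cos(angle), distance * math.sin(angle)

# Example: a return detected 66.7 ns after emission at a 30 degree departure angle.
x, y = obstacle_position(30.0, 66.7e-9)
print(round(x, 2), round(y, 2))  # roughly 8.66 m, 5.0 m
```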
The SLRF 700 may comprise an optical beam splitter 705 in conjunction with two photodiodes 703 to serve one or more functions as described herein. First, scanning speeds may be doubled for any given rotation speed by splitting the laser beam 731 into two beams, each directed 180° away from the other. Second, scanning accuracy may be increased by splitting the beam into two slightly diverging beams, such as, for example, by a fraction of one degree or by any other suitable angle. By directing two slightly diverging beams into the physical space, signal errors, distortions in the surface of any obstacles encountered by the beams, and other distortions may be detected and/or corrected. For instance, because the first and second slightly divergent beams should, in their ordinary course, experience substantially similar flight times to any obstacle (because of their only slight divergence), any substantial difference in travel time between the two beams is likely to correlate to an error. If the processor and/or time-of-flight IC detects a substantial difference in flight time, the processor and/or time-of-flight IC may average the travel time for the divergent beams or discard the calculation and recalculate the time-of-flight on a subsequent revolution of the emitter. Third, as shown in
The SLRF 700 may further comprise one-way optics for collimating the at least one laser beam as it is emitted, and converging returning laser beams.
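As one illustration of the divergent-beam consistency check described above, a minimal sketch follows; the tolerance value and function names are assumptions rather than parameters of the described SLRF.

```python
from typing import Optional

# Illustrative check for the dual slightly-diverging beam scheme described above.
RELATIVE_TOLERANCE = 0.02  # flag readings whose flight times differ by more than 2%

def fuse_flight_times(t_beam_a: float, t_beam_b: float) -> Optional[float]:
    """Average the two flight times when they agree; otherwise discard the
    reading so it can be retaken on a subsequent revolution of the emitter."""
    if abs(t_beam_a - t_beam_b) > RELATIVE_TOLERANCE * max(t_beam_a, t_beam_b):
        return None  # substantial difference: likely an error or surface distortion
    return (t_beam_a + t_beam_b) / 2.0
```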
As previously outlined, the scanning system may be disposed upon an HMD worn by a user. However, it will be appreciated that a user moving throughout a physical environment is likely to move his head and/or body, thereby causing the HMD and, correspondingly, the scanning system to constantly move in 3 dimensions and about 3 axes, as shown in
As shown in
Therefore, as shown in
The stabiliser unit 835 pivotally retains the scanning system 831 above the HMD 812. The scanning system 831 directs scanning beams 803 tangentially from the apex 807 of the user's head 805, i.e., level to the earth's surface, as in
The stabiliser unit may comprise one or more of the following: a two- or three-axis gimbal for mounting the scanner; at least one motor, such as brushless or servo motors for actuating the gimbal; a gyroscope, such as a two- or three-axis gyroscope, or a MEMS gyroscope, for detecting the orientation of the scanner; and a control board for controlling the gimbal based on the detected orientation of the gyroscope.
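For illustration, the stabiliser's control loop might resemble the following sketch, in which the gyroscope's measured tilt drives opposing gimbal corrections; the gain, function names and interfaces are assumptions rather than the control board's actual firmware.

```python
# Minimal sketch of a stabiliser control loop: read the gyroscope, then command
# the gimbal motors to counteract pitch and roll so the scanner stays level.
KP = 0.8  # proportional gain (assumed value for illustration)

def stabilise_step(gyro_pitch_deg: float, gyro_roll_deg: float) -> tuple:
    """Return pitch/roll corrections (degrees) to drive the gimbal back to level."""
    pitch_correction = -KP * gyro_pitch_deg
    roll_correction = -KP * gyro_roll_deg
    return pitch_correction, roll_correction

# Example: the HMD tilts 5 degrees forward and 2 degrees to the side.
print(stabilise_step(5.0, -2.0))  # (-4.0, 1.6) -> motors rotate the gimbal against the tilt
```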
A stabiliser unit configuration is shown in
An alternate stabiliser unit configuration is shown in
In embodiments, the scanning system only provides readings to the processor if the scanning system is level or substantially level, as determined by the method shown in
The control board may be any suitable type of control board, such as, for example, a Martinez gimbal control board. Alternatively, the stabiliser unit may delegate any controls processing to the processor of the HMD.
As shown in
While the scanning system performs scanning for mapping the physical environment, the HMD comprises a local positioning system (LPS) operable to dynamically determine the user's position in 2D or 3D within the physical environment. The LPS may invoke one or more ultrasonic, radio frequency (RF), Wi-Fi location, GPS, laser range finding (LRF) or magnetic sensing technologies. Further, the scanning system and the LPS may share some or all components such that the same system of components may serve both scanning and positioning functions, as will be appreciated.
The LPS may comprise at least one LPS receiver placed on the HMD or the user's body and operable to receive beacons from LPS emitters placed throughout the physical environment. The location for each LPS emitter is known. The LPS calculates the distance d travelled by each beam from each LPS emitter to the at least one LPS receiver on the user's body according to time-of-flight or other wireless triangulation algorithms, including, for example, the equation d=C·t, where C is a constant representing the speed at which the beam travels and t represents the time elapsed between emission and reception of the beam. It will be appreciated that the constant C is known for any given beam type; for a laser beam, for example, C will be the speed of light, whereas for an ultrasonic beam, C will be the speed of sound. Upon thereby calculating the distance between the at least one LPS receiver and at least three LPS emitters disposed at known, and preferably fixed, positions in the physical environment, the LPS trilaterates the distances to determine a location for the user and her HMD in the physical environment. Although at least three emitters are required for determining the local position of a user, increasing the number of emitters within the physical environment results in greater accuracy.
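A simple numerical sketch of the d=C·t relationship follows; the constants and function names are illustrative only.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s, C for a laser beacon
SPEED_OF_SOUND = 343.0          # m/s, approximate C for an ultrasonic beacon

def beacon_distance(time_of_flight_s: float, beam_speed: float) -> float:
    """d = C * t : one-way distance from an LPS emitter to the LPS receiver."""
    return beam_speed * time_of_flight_s

# Example: an ultrasonic beacon received 11.7 ms after emission.
print(round(beacon_distance(11.7e-3, SPEED_OF_SOUND), 2))  # about 4.01 m
```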
Trilateration involves determining the measured distances between the LPS receiver and the LPS emitters, using any of the above described techniques, and solving for the location of the LPS receiver based on the distances and the known locations of the LPS emitters. As shown in
Each user's position, once determined by the LPS, may then be shared with other users in the physical environment by transmitting the position to the central console or directly to the HMDs of other users. When multiple users occupying the same physical environment are equipped with HMDs having local positioning functionality configured to share each user's positions with the other users, some or all of the users may be able to determine where other users are located within the environment. Users' respective HMDs may further generate renderings of an AR version of the other users for viewing by the respective user, based on the known locations for the other users.
While the LPS has been described above with reference to the LPS emitters being located in the physical environment and the LPS receivers being located on the user's body or HMD, the LPS emitters and LPS receivers could equally be reversed so that the LPS receivers are located within the physical environment and at least one LPS emitter is located on the user's body or HMD.
As previously described with reference to the SLRF of
By solving analogous versions of the last quadratic equation for each of y1, y2, and y3, it will be appreciated that the processor will then have sufficient information to determine the location for the receiver 1231.
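The reduction of the quadratic distance equations to a solvable system may be sketched as follows; this is a 2D illustration with assumed coordinates and is not the exact formulation referenced above.

```python
import numpy as np

def trilaterate_2d(anchors, distances):
    """Solve for the receiver's (x, y) given three emitters at known positions
    and the measured distance to each. Subtracting the circle equations in
    pairs turns the quadratic system into a linear one."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A, b)

# Example: emitters at three corners of a 10 m room, receiver at (4, 3).
print(trilaterate_2d([(0, 0), (10, 0), (0, 10)], [5.0, 6.708, 8.062]))  # ~[4. 3.]
```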
Referring now to
The use of 3-axis magnetic fields to provide local positioning may provide numerous advantages, including, for example:
Referring now to
As explained herein in greater detail, each emitter may emit a modulated signal and a corresponding receiver may detect and demodulate the signal to obtain metadata for the signal. For example, a receiver on an HMD may detect a modulated IR signal emitted from an IR emitter in the physical environment. The modulated signal may be emitted at a given frequency; correspondingly, the receiver may be configured to detect the frequency, and a processor may be configured to extract metadata for the signal based on the detected frequency. The metadata may correlate to the coordinates of the emitter within the physical space, or the unique ID for the emitter. If the metadata does not comprise location information for the emitter, but it does comprise the unique ID for the emitter, the processor may generate a query to a memory storing the locations for the emitters in the physical environment. By providing the ID information extracted from the IR signal, the processor may obtain the location information associated with the ID from memory. Signal modulation systems and methods are described herein in greater detail.
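By way of a hedged sketch, the frequency-to-metadata lookup described above might proceed as follows; all frequencies, IDs and coordinates are invented for illustration.

```python
# The receiver reports the detected modulation frequency; the processor resolves
# it to an emitter ID and then queries the stored emitter locations by that ID.
FREQUENCY_TO_EMITTER_ID = {38_000: "emitter_A", 40_000: "emitter_B", 56_000: "emitter_C"}
EMITTER_LOCATIONS = {"emitter_A": (0.0, 0.0, 2.5), "emitter_B": (10.0, 0.0, 2.5),
                     "emitter_C": (0.0, 10.0, 2.5)}

def locate_emitter(detected_frequency_hz: int):
    """Resolve a detected modulation frequency to the emitter's known coordinates."""
    emitter_id = FREQUENCY_TO_EMITTER_ID.get(detected_frequency_hz)
    if emitter_id is None:
        return None  # unknown modulation: ignore the reading
    return EMITTER_LOCATIONS[emitter_id]  # query the stored location by emitter ID

print(locate_emitter(40_000))  # (10.0, 0.0, 2.5)
```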
It will be appreciated that many physical environments, such as, for example, a building with a plurality of rooms, contain obstacles, such as walls, that are prone to break the path travelled by an emitted beam of an LRF. In such environments, ultrasonic or magnetic positioning may provide advantages over laser positioning, since ultrasonic signals may be suited to transmission irrespective of line of sight. As shown in
In embodiments, a scanning laser range finder may serve as the positioning and scanning system. For example, an SLRF may provide scanning, as previously described, as well as positioning in cooperation with emitters and/or receivers placed at known locations in the physical space. Alternatively, once the processor has generated the initial map for the physical space based on readings provided by the SLRF, subsequent dynamic SLRF scanning of the physical space may provide sufficient information for the processor to calculate the position and orientation of the HMD comprising the SLRF with reference to changes in location of mapped features of the physical environment. For example, if the map for the physical environment, which was generated based on the SLRF having an initial orientation θSLRF and initial coordinates in world space XSLRF, YSLRF, comprises a feature having world coordinates X, Y the processor may determine an updated location XSLRF′, YSLRF′ and orientation θSLRF′ for the HMD based on any changes in the relative location of the feature.
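A simplified sketch of updating the scanner's position from a mapped feature follows; it assumes the heading is available from another source (for example the IMU), whereas the approach described above may also recover orientation, for example from multiple features. The function name and parameters are assumptions.

```python
import math

def update_scanner_position(feature_world_xy, measured_range, measured_bearing_deg,
                            scanner_heading_deg):
    """Given a mapped feature's known world coordinates, the range and bearing at
    which the SLRF currently sees it, and the scanner heading, recover the
    scanner's updated world position."""
    fx, fy = feature_world_xy
    angle = math.radians(scanner_heading_deg + measured_bearing_deg)
    return fx - measured_range * math.cos(angle), fy - measured_range * math.sin(angle)

# Example: a mapped doorway corner at (6, 4) is seen 5 m away at a 36.87 degree
# bearing while the scanner heading is 0 degrees -> scanner is at roughly (2, 1).
print(update_scanner_position((6.0, 4.0), 5.0, 36.87, 0.0))
```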
Further, the LPS may comprise ultrasonic, laser or other suitable positioning technologies to measure changes in height for the HMD. For example, in a physical environment comprising a ceiling having a fixed height, an ultrasonic transmitter/emitter directed towards the ceiling may provide the height of the HMD at any time relative to a height of the HMD at an initial reading. Alternatively, for a user equipped with a magnetic positioning system, the height of the HMD may be determined by placing either a magnetic emitter or a magnetic sensor near her feet and the other of the two on her HMD and determining the distance between the magnetic emitter and the magnetic sensor.
The HMD may further comprise a 9-degree-of-freedom (DOF) inertial measurement unit (IMU) configured to determine the direction, orientation, speed and/or acceleration of the HMD and transmit that information to the processor. This information may be combined with other positional information for the HMD as determined by the LPS to enhance location accuracy. Further, the processor may aggregate all information relating to position and motion of the HMD and peripherals to enhance redundancy and positional accuracy. For example, the processor may incorporate data obtained by the scanning system to enhance or supplant data obtained from the LPS. The positions for various peripherals, including those described herein, may be determined according to the same techniques described above. It will be appreciated that a magnetic positioning system, such as described herein, may similarly provide information to the processor from which the direction, orientation, speed and/or acceleration of the HMD and other components and/or systems equipped therewith may be determined, instead of, or in addition to, other inertial measurement technologies. Therefore, it will be understood that the inertial measurement unit may be embodied by an LPS invoking magnetic positioning.
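One simple way to combine IMU and LPS outputs is a complementary blend, sketched below; the blend weight and interfaces are assumptions, and a production system might instead use a Kalman filter or other fusion technique.

```python
# Hedged sketch of fusing IMU dead reckoning with LPS fixes via a complementary blend.
ALPHA = 0.9  # weight given to the IMU-propagated estimate between LPS fixes (assumed)

def fuse_position(imu_estimate_xy, lps_fix_xy):
    """Blend the high-rate IMU estimate with the absolute LPS position fix."""
    ix, iy = imu_estimate_xy
    lx, ly = lps_fix_xy
    return (ALPHA * ix + (1 - ALPHA) * lx,
            ALPHA * iy + (1 - ALPHA) * ly)

print(fuse_position((4.10, 2.95), (4.00, 3.00)))  # drifting IMU pulled toward the LPS fix
```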
As previously described, and as will be appreciated, the outputs of the LPS, the IMU and the scanner are all transmitted to the processor for processing.
AR rendering of the physical environment, which occurs in the processor, may further comprise obtaining imaging for the physical environment; however, it will be understood that a user may engage with an AR based on the physical environment without seeing any imaging for the physical environment. For example, the AR may contain only virtual renderings of the physical environment, although these may be modelled on the obstacles and topography of the physical environment. In embodiments, the degree to which the AR comprises images of the physical environment may be user-selectable or automatically selected by the processor. In yet another embodiment, the display system comprises a transparent or translucent screen onto which AR image streams are overlaid, such that the AR presented to a user may incorporate visual aspects of the physical environment without the use of an imaging system. This may be referred to as “see-through” AR. See-through AR may be contrasted with “pass-through” AR, in which an imaging system captures an image stream of the physical environment and electronically “passes” that stream to a screen facing the user. The HMD may therefore comprise an imaging system to capture an image stream of the physical environment.
The processor renders computer generated imaging (CGI) which may comprise an overlay of generated imaging on a rendering of the physical environment to augment the output of the imaging system for display on the display system of the HMD. The imaging system may comprise at least one camera, each of which may perform a separate but parallel task, as described herein in greater detail. For example, one camera may capture standard image stream types, while a second camera may be an IR camera operable to “see” IR beams and other IR emitters in the physical environment. In an exemplary scenario, the IR camera may detect an IR beam “shot” between a first and second player in a game. The processor may then use the detection as a basis for generating CGI to overlay on the IR beam for display to the user. For example, the processor may render the “shot” as a green beam which appears on the user's display system in a suitable location to mimic the “shot” in the rendering of the physical environment. In embodiments, elements, such as, for example, other users' peripherals, may be configured with IR LEDs as a reference area to be rendered. For example, a user may be equipped with a vest comprising an IR LED array. When the user is “shot”, the array is activated so that other users' HMDs detect, using monochrome cameras, the IR light from the array for rendering as an explosion, for example. Through the use of multiple cameras operable to capture different types of light within the physical environment, the processor may thereby render a highly rich and layered AR environment for a given physical environment.
The at least one camera of the imaging system may be connected to the processor by wired or wireless connections suitable for video streaming, such as, for example, I2C, SPI, or USB connections. The imaging system may comprise auto focus cameras each having an external demagnification lens providing an extended wide field-of-view (FOV), or cameras having wide FOV fixed focus lenses. The imaging system may capture single or stereo image streams of the physical environment for transmission to the processor.
Each camera may further be calibrated to determine its field-of-view and corresponding aspect ratio depending on its focus. Therefore, for any given camera with a known aspect ratio at a given focal adjustment, the processor may match the screen and camera coordinates to world coordinates for points in an image of the physical environment.
As shown in
The processor may collect data from the other components described herein, as shown in
In an exemplary scenario as shown in
As previously described, all processing tasks may be performed by one or more processors in each individual HMD within a physical environment, or processing tasks may be shared with the server, the console or other processors external to the HMDs.
In at least one exemplary configuration for a processor, as shown in
The processor may be a mobile computing device, such as a laptop, a mobile phone or a tablet. Alternatively, the processor may be a microprocessor onboard the HMD. In embodiments, as shown in
Regardless of the physical configuration of the at least one processor, processing to AR render the physical environment in which at least one user is situated may comprise generating AR graphics, sounds and other sensory feedback to be combined with the actual views of the physical environment for engaging with the at least one user.
Referring to
Other possible augmentation may include applying environmental layers, such as, for example, rain, snow, fog and smoke, to the captured images of the physical environment. The processor may even augment features of the physical environment by, for example, rendering topographical features to resemble rugged mountains, rendering barren “sky” regions as wispy clouds, rendering otherwise calm water bodies in the physical environment as tempestuous seas, and/or adding crowds to vacant areas.
Expression based rendering techniques performed by the processor may be invoked to automate graphical animation of “living” characters added to the AR rendering. For example, characters may be rendered according to anatomical models to generate facial expressions and body movements.
The processor may further invoke enhanced texture mapping to add surface texture, detail, shading and colour to elements of the physical environment.
The processor may comprise an image generator to generate 2D or 3D graphics of objects or characters. It will be appreciated that image generation incurs processing time, potentially leading to the user perceiving lag while viewing the AR rendered physical environment. To mitigate such lag, the processor buffers the data from the at least one camera and renders the buffered image prior to causing the display system to display the AR rendered physical environment to the user. The image generator preferably operates at a high frequency update rate to reduce the latency apparent to the user.
The image generator may comprise any suitable engine, such as, for example, the Unity game engine or the Unreal game engine, to receive an image feed of the physical environment from the imaging system and to generate AR and/or VR objects for the image feed. The image generator may retrieve or generate a wire frame rendering of the object using any suitable wire frame editor, such as, for example, the wire frame editor found in Unity. The processor further assigns the object and its corresponding wire frame to a location in a map of the physical environment, and may determine lighting and shading parameters at that location by taking into account the shading and lighting of the corresponding location in the image stream of the physical environment. The image generator may further invoke a suitable shading technique or shader, such as, for example, Specular in the Unity game engine, in order to appropriately shade and light the object. Artifacts such as shadows can be filtered out through mathematical procedures. The processor may further generate shading and lighting effects for the rendered image stream by computing intensities of light at each point on the surfaces in the image stream, taking into account the location of light sources, the colour and distribution of reflected light, and even such features as surface roughness and the surface materials.
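As a minimal sketch of computing light intensity at a surface point, a basic diffuse (Lambertian) model is shown below; this is an illustrative stand-in, not the specific shader or lighting model used by the engines named above.

```python
import numpy as np

def lambert_intensity(surface_point, surface_normal, light_position, light_intensity=1.0):
    """Diffuse (Lambertian) intensity at a surface point: proportional to the
    cosine of the angle between the surface normal and the direction to the light."""
    to_light = np.asarray(light_position, float) - np.asarray(surface_point, float)
    to_light /= np.linalg.norm(to_light)
    normal = np.asarray(surface_normal, float)
    normal /= np.linalg.norm(normal)
    return light_intensity * max(0.0, float(np.dot(normal, to_light)))

# Example: a light directly above a horizontal surface gives full intensity.
print(lambert_intensity((0, 0, 0), (0, 0, 1), (0, 0, 5)))  # 1.0
```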
The image generator is further operable to generate dynamic virtual objects capable of interacting with the physical environment in which the user is situated. For example, if the image generator generates a zombie character for the AR rendered physical environment, the image generator may model the zombie's feet to interact with the ground on which the zombie is shown to be walking. In an additional exemplary scenario, the processor causes a generated dragon to fly along a trajectory calculated to avoid physical and virtual obstacles in the rendered environment. Virtual scenery elements may be rendered to adhere to natural tendencies for the elements. For example, flowing water may be rendered to flow towards lower lying topographies of the physical environment, as water in the natural environment tends to do. The processor may therefore invoke suitable techniques to render generated objects within the bounds of the physical environment by applying suitable rendering techniques, such as, for example, geometric shading.
The processor, then, may undertake at least the following processing tasks: it receives the image stream of the physical environment from the imaging system to process the image stream by applying filtering, cropping, shading and other imaging techniques; it receives data for the physical environment from the scanning system in order to map the physical environment; it receives location and motion data for the at least one user and the at least one device location in the physical environment to reflect each user's interaction with the physical environment; it computes game or other parameters for the physical environment based on predetermined rules; it generates virtual dynamic objects and layers for the physical environment based on the generated map of the physical environment, as well as on the parameters, the locations of the at least one user and the at least one device in the physical environment; and it combines the processed image stream of the physical environment with the virtual dynamic objects and layers for output to the display system for display to the user. It will be appreciated throughout that the processor may perform other processing tasks with respect to various components and systems, as described with respect thereto.
When a user equipped with an HMD moves throughout the physical environment, the user's HMD captures an image stream of the physical environment to be displayed to the user. In AR applications, however, AR layers generated by the processor are combined with the image stream of the physical environment and displayed to the user. The processor therefore matches the AR layers, which are rendered based at least on mapping, to the image stream of the physical environment so that virtual effects in the AR layers are displayed at appropriate locations in the image stream of the physical environment.
In one matching technique, an imaging system of an HMD comprises at least one camera to capture both the image stream of the physical environment, as well as “markers” within the physical environment. For example, the at least one camera may be configured to detect IR beams in the physical environment representing a “marker”. If the imaging system comprises multiple cameras, the cameras are calibrated with respect to each other such that images or signals captured by each camera are coordinated. In applications where the processor renders AR effects for IR beams, then, the processor may only need to combine the AR stream with the image stream for display in order to effect matching. Alternatively, the processor may need to adjust the AR stream based on known adjustments to account for different perspectives of each of the cameras contributing data to the processor.
In another matching technique, matching may be markerless, and the processor may use location, orientation and motion data for the HMD and other system components to perform matching. Markerless matching is illustrated in
X = ƒ(Y, screen aspect ratio, camera aspect ratio), and
Z = ƒ(Y, magnification, screen aspect ratio).
Y is the screen split factor, which accounts for the distortion of the screen aspect ratio relative to the camera aspect ratio and is known for a system having fixed lenses and displays; Y is fixed for a given screen; X represents the camera field of view; and Z represents the screen field of view. The processor, then, associates screen coordinates to the world coordinates of the field of view captured by the at least one camera of the imaging system. Using the orientation and location of the HMD, the processor may determine the orientation and location of the field of view of the at least one camera and determine a corresponding virtual field of view having the same location and orientation in the map of the physical environment. Using the equations described immediately above, the processor then determines the screen coordinates for displaying the rendered image on the screen having screen split factor Y.
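Because the exact functional forms of ƒ above are not specified, the following is only an illustrative sketch of mapping a camera pixel to a screen pixel under an assumed simple scaling by the two aspect ratios and a fixed split factor; the resolutions, parameter names and defaults are invented for the example.

```python
# Illustrative only: assumes a simple resolution-ratio mapping between camera
# and screen coordinates, scaled by the (fixed) screen split factor Y.
def camera_to_screen(px_cam, py_cam, cam_res=(1920, 1080), screen_res=(2160, 1200),
                     screen_split_factor=1.0):
    """Map a pixel in the camera image to the corresponding screen pixel."""
    sx = px_cam * screen_res[0] / cam_res[0] * screen_split_factor
    sy = py_cam * screen_res[1] / cam_res[1] * screen_split_factor
    return sx, sy

# Example: the centre of the camera image maps to the centre of the display.
print(camera_to_screen(960, 540))  # (1080.0, 600.0)
```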
The display system of the HMD may comprise a display surface, such as an LCD, LED display, OLED display or other suitable electronic visual display to display image streams to the user. Additionally or alternatively, the display surface may consist of transparent, translucent, or opaque material onto which image streams are projected from a projector located elsewhere on the HMD. The display system may provide heads-up notifications generated by the processor. A user wearing the HMD may view her surrounding physical environment as an unaltered or augmented reality environment displayed on the display surface. Further, in applications where engagement with the user's physical surroundings is not required, the display system of the HMD may simply display VR or other streams unrelated to AR rendering of the physical environment in which the user is situated.
Input to the display system may be in one or more suitable formats, such as, for example, HDMI, mini HDMI, micro HDMI, LVDS, and MIPI. The display system may further accept input from various external video inputs, such as television boxes, mobile devices, gaming consoles, in various resolutions, such as, for example, 720p, 1080p, 2K and 4K.
The real-time image on the display system of the HMD may be replicated to an external output device, such as, for example, a monitor or television, for bystanders or other parties to see what the wearer of the HMD is seeing.
As shown in
If, as illustrated, the HMD display system 1831 is configured to receive MIPI inputs, whereas the external display 1833 is configured to receive DVI or HDMI inputs, and all video sources generate DVI or HDMI outputs, the HMD may comprise an embedded digital signal processor (DSP) having system-on-a-chip (SOC) 1811, as shown, configured to process DVI and HDMI streams from the HMD video source 1801 and output video in MIPI, DVI and HDMI streams. The SOC 1811 may reduce the burdens on other processor elements by combining the various input and output video streams required for displaying the AR rendered physical environment to the at least one user. Integration of the streaming algorithms within an embedded DSP may provide relatively low power processing.
The SOC 1811 provides the MIPI stream to a 2-to-1 video selector 1825. The DSP further comprises a 1-to-2 video splitter 1821 for providing two HDMI or DVI streams to each of: (i) an integrated circuit (IC) 1813, which converts the HDMI output of the external video source 1803 into a MIPI stream; and (ii) a first 2-to-1 video selector 1823 to provide a combined DVI/HDMI signal to the external device 1803 from the SOC 1811 and the IC 1813. A second 2-to-1 video selector 1825 combines the converted (i.e., from DVI or HDMI to MIPI) HMD video stream with the MIPI stream from the IC 1813 to generate the stream to be displayed by the HMD display system 1831.
As shown in
In addition to visual inputs and outputs previously described in greater detail, user engagement with a physical environment may be enhanced by other types of input and output devices providing, for example, haptic or audio feedback, as well as through peripherals, such as, for example, emitters, receivers, vests and other wearables. The processor may therefore be operable to communicate with a plurality of devices providing other types of interaction with the physical environment, such as the devices described herein.
As shown in
The various LPSs 1927 or 128 in the emitter 1913 may function in the same manner as the LPSs previously described with reference to the HMD. When the user engages the trigger through the trigger switch 1938, which may be, for example, a push button or strain gauge, the microprocessor 1931 registers the user input and causes the IR LED driver 1933 to drive the IR LED source 1940 to emit an IR beam into the physical environment; however, the emitter 1913 may further enhance user perception if, for example, the microprocessor initiates a solenoid providing recoil feedback 1934 to the user. The haptic feedback unit may consist of a vibrator mounted to the emitter 1913 which may be activated whenever the user attempts to initiate firing of the beam.
Biometric sensors 1937 in the emitter 1913 are configured to gather biometric information from the user and provide that information to, for example, the user's HMD. In an exemplary scenario, upon detecting an increase in the user's heart rate during a laser tag game, the microprocessor may escalate haptic feedback to further excite the user, thereby adding a challenge which the user must overcome in order to progress.
When the trigger switch is depressed, the microprocessor may cause LEDs 1938 to be illuminated on the emitter as a visual indication of emission of the beam. The user's HMD, which corresponds with the emitter 1913, may similarly display a visual indication of the emission in the colour wheel of the HMD's display system, as previously described.
Preferably, the IR LED source 1940 is paired with optics 1940 to collimate the IR beam. The IR LED driver 1933 modulates the beam according to user feedback and game parameters obtained from the microprocessor 1931. The LCD screen 1939 may display information, such as ammo or gun type on the surface of the emitter 1913.
Any peripheral, including the emitter and the receiver, may comprise an inertial measurement system, such as, for example, an accelerometer, an altimeter, a compass, and/or a gyroscope, providing up to 9 DOF, to determine the orientation, rotation, acceleration, speed and/or altitude of the peripheral. The various LPS and inertial measurement system components 1927 may provide information about the orientation and location of the emitter 1913 at the time the beam is emitted. This information, which is obtained by the microprocessor 1931 and transmitted to the user's HMD, other users' HMDs, the server or the console via the wireless communication interface 1926, can be used during AR rendering of the physical environment, by for example, rendering the predicted projection of IR beam as a coloured path or otherwise perceptible shot.
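For illustration, converting the emitter's reported position and orientation into a renderable beam path might be sketched as below; the function name, sampling scheme and angle conventions are assumptions for the example.

```python
import math

def beam_path_points(emitter_xyz, yaw_deg, pitch_deg, length_m=20.0, samples=10):
    """Sample points along the predicted straight-line path of an emitted IR beam,
    given the emitter's position and orientation, for rendering as a coloured shot."""
    x0, y0, z0 = emitter_xyz
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    dx = math.cos(pitch) * math.cos(yaw)
    dy = math.cos(pitch) * math.sin(yaw)
    dz = math.sin(pitch)
    step = length_m / samples
    return [(x0 + i * step * dx, y0 + i * step * dy, z0 + i * step * dz)
            for i in range(samples + 1)]

# Example: a level shot fired along the x-axis from (1, 1, 1.5); points feed the renderer.
print(beam_path_points((1.0, 1.0, 1.5), 0.0, 0.0, length_m=5.0, samples=5)[:3])
```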
With reference to
Preferably, each emitter 13 in the system 10 shown in
Further, the processor may assess game parameters, such as, for example, damage suffered by a user after being hit by another user. The processor may record a hit as a point to the user whose emitter emitted a beam received in another user's receiver, and as a demerit to the other user who suffered the harm. Further, the other user's HMD 12 or receiver 14 may initiate one or more haptic, audio or visual feedback systems to indicate to that other user that he has been hit.
Referring now to
The receiver 14 may further provide visual, haptic and other sensory outputs to its user, as well as other users in the physical environment.
An exemplary receiver layout is shown in
Referring again to
Registration of a “hit”, i.e., reception of a beam, may trigger various feedback processes described herein. For example, the user's vest may comprise haptic output to indicate to the user that he has suffered a hit. Further, as described, the user's receiver 14 may comprise at least one LED 180 which the microprocessor activates in response to a hit. Similar to the emitter 13, the receiver 14 may comprise biometric sensors, such as the biometric sensors 2168 shown in
With reference now to
Referring again to
The beam emitted from an emitter to a receiver may be collimated. As shown in
The emitter initiates data transfer to the receiver via a modulated frequency signal. The data is transferred to the receiver and is processed for key game parameters such as type of gun hit, type of blast, type of impact, the user ID, and other parameters using IR communication. This allows reactions between multiple emitters of varying types to be processed more accurately as different types of effects. For example, if an in-game virtual IR explosion were to occur, the data transferred to the receiver will trigger an explosion-based reaction on the receiver(s), which in turn will produce a specified desired effect on the HMD(s). The HMD(s) will create imagery specific to the desired effect based on the IR light frequency detected at the receiver(s) and use this information to overlay the required visual effect.
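A hedged sketch of packing the game parameters described above into a small payload for the modulated IR link follows; the field widths and packet layout are assumptions for illustration only.

```python
def encode_hit_packet(user_id: int, gun_type: int, blast_type: int, impact_type: int) -> int:
    """Pack four small fields into one integer payload (8 + 4 + 4 + 4 bits)."""
    return ((user_id & 0xFF) << 12) | ((gun_type & 0xF) << 8) | \
           ((blast_type & 0xF) << 4) | (impact_type & 0xF)

def decode_hit_packet(payload: int) -> dict:
    """Recover the fields so the receiver (and then the HMD) can pick the right effect."""
    return {"user_id": (payload >> 12) & 0xFF,
            "gun_type": (payload >> 8) & 0xF,
            "blast_type": (payload >> 4) & 0xF,
            "impact_type": payload & 0xF}

packet = encode_hit_packet(user_id=7, gun_type=2, blast_type=1, impact_type=3)
print(decode_hit_packet(packet))  # {'user_id': 7, 'gun_type': 2, 'blast_type': 1, 'impact_type': 3}
```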
Referring now to
In the exemplary scenario, for example, the local position of user 2403 may be determined by that user's HMD (not shown) according to, for example, trilateration, or as otherwise described herein. Further, the location and orientation of user 2403's emitter 2440 when emitting beam 2407 may be determined from the LPS and inertial measurement system of the emitter 2440. All position and orientation data for the user 2403 and her emitter 2440 may be shared with the processor of the HMD worn by the observing user, and the processor may enhance those elements for display to the display system 2410 of the observing user. The beam 2407, for example, may be rendered as an image of a bullet having the same trajectory as the beam 2407. Further, the user 2403 may be rendered as a fantastical character according to parameters for the game.
Additionally, a user's peripherals, such as a receiver 14 or HMD 12 may comprise an IR LED array 180, as previously described, and as shown in
In embodiments, each user's HMD may be equipped with at least one receiver to, for example, detect a head shot. Similarly, the HMD may further comprise biometric sensors, as previously described with respect to the emitters and receivers for providing similar enhancements. The HMD may further comprise audio and haptic feedback, as shown, for example in
While interactions between emitters and receivers have been described herein primarily as half-duplex communications, obvious modifications, such as equipping each emitter with a receiver, and vice versa, may be made to achieve full duplex communication between peripherals.
Additional peripherals in communication with the HMD may further comprise configuration switches, such as, for example push buttons or touch sensors, configured to receive user inputs for navigation through menus visible in the display system of the HMD and communicate the user inputs to the processor.
It will be appreciated that the systems and methods described herein may enhance or enable various applications. For example, by using sports-specific or configured peripherals, AR sports training and play may be enabled. In a game of tennis, exemplary peripherals might include electronic tennis rackets. In a soccer game, users may be equipped with location and inertial sensors on their feet to simulate play.
Further exemplary applications may comprise role-playing games (RPGs), AR and VR walkthroughs of conceptual architectural designs applied to physical or virtual spaces, and defence-related training.
Although the following has been described with reference to certain specific embodiments, various modifications thereto will be apparent to those skilled in the art without departing from the spirit and scope of the invention as outlined in the appended claims. The entire disclosures of all references recited above are incorporated herein by reference.
| Number | Date | Country |
| --- | --- | --- |
| 61886423 | Oct 2013 | US |