Increased capabilities in computer processing, such as improved real-time image and audio processing, have aided the development of powerful simulation systems such as vehicle, weapon, and flight simulators, action games, and engineering workstations, among other types. Simulators are frequently used as training devices that permit a participant to interact with a realistic simulated environment without the necessity of actually going into the field to train in a real environment. For example, different simulators may enable a live participant, such as a police officer, pilot, or tank gunner, to acquire, maintain, and improve skills while minimizing costs and, in some cases, the risks and dangers that are often associated with live training.
Current simulators perform satisfactorily in many applications. However, customers for simulators, such as branches of the military, law enforcement agencies, and industrial and commercial entities, have expressed a desire for more realistic simulations so that training effectiveness can be improved. In addition, simulator customers typically seek to improve the quality of simulated training environments by increasing realism and finding ways to make the simulated experiences more immersive. With regard to shooting simulations in particular, customers have shown a desire for more accurate and complex simulations that go beyond the typical shoot/no-shoot scenarios that are currently available.
This Background is provided to introduce a brief context for the Summary and Detailed Description that follow. This Background is not intended to be an aid in determining the scope of the claimed subject matter nor be viewed as limiting the claimed subject matter to implementations that solve any or all of the disadvantages or problems presented above.
A simulator system includes functionality for dynamically tracking position and orientation of one or more simulation participants and objects as they move throughout a capture volume using an array of motion capture video cameras so that two- or three-dimensional (“2D” and “3D”) views of a virtual environment, which are unique to each participant's point of view, may be generated by the system and rendered on a display. In 3D and/or multi-participant usage scenarios, the unique views are decoded from a commonly utilized display by equipping the participants with glasses that are configured with shutter lenses, polarizing filters, or a combination of both. The object tracking supports the provision and use of an optical signaling capability that may be added to an object so that manipulation of the object by the participant can be communicated to the simulator system over the optical communications path that is enabled by use of the video cameras.
In various illustrative examples, the simulator system supports a shoot wall simulation where the simulated personnel (i.e., avatars) can be generated and rendered in the virtual environment so they react to the position and/or motion of the simulation participant. The gaze and/or weapon aim of the avatars, for example, will move in response to the location of the participant so that the avatars realistically appear to be looking and/or aiming their weapons at the participant. The participant's weapon may be tracked using the object tracking capability by tracking markers affixed to the weapon at known locations. A light source affixed to the weapon and operatively coupled to the weapon's trigger is actuated by a trigger pull to optically indicate to the simulator system that the participant has fired the weapon. Using the known location of the weapon gained from the motion capture, an accurate trajectory of discharged rounds from the weapon can be calculated and then realistically simulated. Use of the light source allows the motion capture system to detect weapon fire without the need for cumbersome and restrictive conventional wired or tethered interfaces.
The participant's head is tracked through motion capture of markers that are affixed to a helmet or other garment/device worn by the participant when interacting with the simulation. By correlating head position in the capture volume to the participant's gaze direction, an accurate estimate can be made as to where the participant is looking. A dynamic view of the virtual environment from the participant's point of view can then be generated and rendered. Such dynamic view generation and rendering from the point of view of the participant enables the participant to interact with the virtual environment in a realistic and believable manner, for example, by changing position in the capture volume to look around an obstacle and reveal an otherwise hidden target.
Advantageously, the present simulator system supports a richly immersive and realistic simulation by enabling the participant's interaction with the virtual environment that more closely matches interactions with an actual physical environment. In combination with accurate trajectory simulation, the participant-based point of view affords the virtual environment with the appearance and response that would be expected of a real environment—avatars react with gaze direction and weapon aim as would their real world counterparts, rounds sent downrange hit where expected, and the rendered virtual environment has realistic depth well past the plane of the shoot wall.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Like reference numerals indicate like elements in the drawings. Unless otherwise indicated, elements are not drawn to scale.
The simulation environment 100 may also support multiple participants when required by a particular training scenario. In many applications where a 3D virtual environment is implemented, the present simulator system may be configured to support two participants, each of whom is provided with unique and independent 3D views of the virtual environment generated by the system. In applications where a 2D virtual environment is implemented, a configuration may be utilized that supports up to four participants, each of whom is provided with an independent 2D view of the virtual environment generated by the system. The configurations used to support multiple participants are discussed in more detail below.
As shown in
A simulation display screen 120 is also supported in the environment 100. The display screen 120 provides a dynamic view 125 of the virtual environment that is generated by the simulator system. Typically a video projector is used to project the view 125 onto the display screen 120, although direct view systems using flat panel emissive displays can also be utilized in some applications. In
The simulation environment 100 shown in
In some implementations of CAVE, the display screens 205₁, 205₂ . . . 205₄ enclose a space that is approximately 10 feet wide, 10 feet long, and 8 feet high; however, other dimensions may be utilized as required by a particular implementation. The CAVE paradigm has also been extended to fifth and/or sixth display screens (i.e., the rear wall and ceiling) to provide simulations that may be even more encompassing for the participant 105. Video projectors 210₁, 210₂ . . . 210₄ may be used to project appropriate portions of the virtual environment onto the corresponding display screens 205₁, 205₂ . . . 205₄. In some CAVE simulators, the virtual environment is projected stereoscopically to support 3D observation by the participant 105 and interactive experiences with substantially full-scale images.
As shown in
The positions are defined by six degrees of freedom ("DOF"), as depicted by the coordinate system 400 shown in
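By way of a non-limiting illustration, a six degree-of-freedom position and orientation may be represented in software along the lines of the following sketch; the field names and the forward-vector convention shown here are assumptions for illustration rather than details of the coordinate system 400.

```python
from dataclasses import dataclass
import math

@dataclass
class Pose6DOF:
    """Position (e.g., meters) and orientation (radians) of a tracked body."""
    x: float      # translation along the first horizontal axis of the capture volume
    y: float      # translation along the second horizontal axis
    z: float      # translation along the vertical axis
    roll: float   # rotation about the forward axis
    pitch: float  # rotation about the lateral axis
    yaw: float    # rotation about the vertical axis

    def forward_vector(self):
        """Unit vector along the body's forward (gaze or aim) direction,
        assuming a yaw-then-pitch convention."""
        cp, sp = math.cos(self.pitch), math.sin(self.pitch)
        cy, sy = math.cos(self.yaw), math.sin(self.yaw)
        return (cp * cy, cp * sy, sp)
```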
Returning again to
In this illustrative example, the video cameras 305 may be configured as part of a reflective optical motion capture system. As shown in
In this illustrative example, the markers 705 are used to dynamically track the position and orientation of the participant's head during interaction with a simulation. Head position is generally well correlated to gaze direction of the participant 105. In other words, knowledge of the motion and position of the participant's head enables an accurate inference to be drawn as to what or who the participant is looking at within the virtual environment. In alternative implementations, additional markers may be applied to the participant, for example, using a body suit, harness, or similar device, to enable full body tracking within the capture volume 115. Real time full body tracking can typically be expected to consume more processing cycles and system resources as compared to head tracking, but may be desirable in some applications where, for example, a simulation is operated over distributed simulator infrastructure and avatars of local participants need to be generated for display on remote systems.
In this illustrative example, the object 805 is also configured to support one or more light sources 815₁, 815₂ . . . 815N that may be selectively user-actuated via a switch 820 that is operatively coupled to the lights, as indicated by line 825. The light sources 815 may be implemented, for example, using IR LEDs that are powered by a power source, such as a battery (not shown), that is internally disposed in the object or arranged as an externally-coupled power pack. The light sources 815 are used to effectuate a relatively low-bandwidth optical communication path for signaling or transmitting data from the object 805 (or from the participant via interaction with the object) within the capture volume 115 using the same optical motion capture system that is utilized to track the position of the participant and object. Advantageously, the light sources 815 implement the signal path without the necessity of additional communications infrastructure such as RF (radio frequency), magnetic sensing, or other equipment. In addition, utilization of an optically-implemented communication path obviates the need for wires, cables, or other tethers that might restrict movement of the participant 105 within the capture volume 115 or otherwise reduce the realism of the simulation.
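By way of illustration only, one possible read-out convention for such a one-bit optical channel is sketched below: each motion-capture frame either does or does not contain a bright blob beyond the expected tracking markers, and that extra blob is interpreted as the light source being on. The function and argument names are hypothetical.

```python
def read_signal_bit(detected_blobs, expected_marker_count):
    """Interpret one motion-capture frame as a single signaling bit.
    detected_blobs: bright points found in the frame (markers plus any active light source).
    Returns True when an extra blob beyond the tracking markers indicates the
    object's signaling light source is on (assumed convention)."""
    return len(detected_blobs) > expected_marker_count
```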
As with the markers 810, the light sources 815 are rigidly fixed to the object 805 at known locations. The light sources 815 may be located on the object 805 both along the long axis as well as off-axis, as shown. The number of light sources 815 utilized and their location on the object 805 can vary by application. Typically, however, at least one light source 815 will be utilized to provide a one-bit, binary (i.e., on/off) signaling capability.
In operation during a simulation, the participant's actuation of the trigger 920 will activate a light source 915 to signal that the weapon has been virtually fired. In some cases, different light activation patterns can signal different types of weapon discharge, such as a single round per trigger pull, a three-round burst per trigger pull, fully automatic fire with a trigger pull, and the like. Such patterns can be implemented, for example, by various flash patterns using a single light source or multiple light sources 915. Activation of the light source 915 will be detected by one or more of the video cameras 305 (
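The following sketch suggests one way such flash patterns could be decoded on the processing side by counting off-to-on transitions of the signaling light within a short window after the first flash; the window length and the pattern-to-mode mapping are illustrative assumptions rather than values taken from the system described here.

```python
# Assumed mapping of flash counts to discharge types.
FLASHES_TO_MODE = {1: "single round", 2: "three-round burst", 3: "fully automatic"}

def decode_fire_mode(bit_samples, frame_rate_hz, window_s=0.25):
    """bit_samples: per-frame on/off states of the weapon's signaling light source.
    Counts off->on transitions inside a short window starting at the first flash
    and maps the count to a discharge type (None if no flash or unknown pattern)."""
    if True not in bit_samples:
        return None                       # no flash detected in this sample run
    start = bit_samples.index(True)
    end = min(len(bit_samples), start + int(window_s * frame_rate_hz))
    flashes, prev = 0, False
    for state in bit_samples[start:end]:
        if state and not prev:
            flashes += 1
        prev = state
    return FLASHES_TO_MODE.get(flashes)
```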
Eye-specific views can be generated by configuring the left-eye and right-eye lenses (as respectively indicated by reference numerals 1010 and 1015) as LCD (liquid crystal display) shutter lenses. Liquid crystal display shutter lenses are also known as "flicker glasses." Each shutter lens contains a liquid crystal layer that alternately goes dark or transparent as a voltage is applied or removed. The voltage is controlled by a timing signal received at the glasses 1005 (e.g., via an optical or radio frequency communications link to a remote imaging subsystem or module) that enables the shutter lenses to alternately darken over one eye of the participant 105 and then the other eye in synchronization with the refresh rate of the display screen 120 (
In other implementations of the present simulator system, the glasses 1005 may be configured to decode separate left- and right-eye views by applying polarizing filters to the lenses 1010 and 1015. For example, left- and right-handed circular polarizing filters may be respectively utilized in the lenses. Alternatively, linear polarizing filters may be utilized that are orthogonally oriented in respective lenses. As each lens only passes images having like polarization, stereoscopic imaging can be implemented by projecting two different views (each view being uniquely polarized) that are superimposed onto the display screen 120. In some applications, use of circular polarization may be particularly advantageous to avoid image bleed between left and right views and/or loss of stereoscopic perception that may occur while using linear polarizing filters when the participant's head is tilted to thus misalign the polarization axes of the glasses with the projected display.
The glasses 1005 may alternatively be configured with both shutter and polarizing components. By employing such a configuration, the glasses 1005 can decode and disambiguate among four unique and dynamic points of view of a virtual environment shown on the display screen 120. That is, two unique viewpoints can be supported using synchronous shuttering, and two additional unique views can be supported using polarizing filters. In combination with appropriate generation and projection of a virtual environment on the display screen 120, the four unique views may be used to provide, for example, each of two participants with unique 3D views of the virtual environment, or each of four participants with unique 2D views of the virtual environment.
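One plausible channel assignment for the combined shutter/polarizer configuration is sketched below, with the shutter phase distinguishing participants and the polarization distinguishing eyes in the two-participant 3D case; the labels and the particular assignment are assumptions for illustration.

```python
from itertools import product

SHUTTER_PHASES = ("phase_A", "phase_B")                 # time-multiplexed by the shutter lenses
POLARIZATIONS = ("left_circular", "right_circular")     # superimposed, decoded by the filters

# Each (shutter phase, polarization) pair is an independent channel: four in total.
CHANNELS = list(product(SHUTTER_PHASES, POLARIZATIONS))

def assign_channels(mode):
    """Map display channels to participants for the two usage scenarios described above."""
    if mode == "3d_two_participants":
        # Shutter phase selects the participant; polarization selects the eye (assumed scheme).
        return {"participant_1": {"left_eye": CHANNELS[0], "right_eye": CHANNELS[1]},
                "participant_2": {"left_eye": CHANNELS[2], "right_eye": CHANNELS[3]}}
    if mode == "2d_four_participants":
        # Each participant receives one of the four channels as a unique 2D view.
        return {f"participant_{i + 1}": CHANNELS[i] for i in range(4)}
    raise ValueError(f"unknown mode: {mode}")
```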
The provision of a unique dynamic point of view per participant is a feature of the present simulator system that may provide additional realism to a simulation by addressing the issue of parallax distortion that is frequently experienced when interacting with conventional shoot wall simulators. Parallax distortion occurs when a virtual environment is generated and displayed using a point of view that is fixed and does not move during the course of a simulation. In other words, assuming an imaginary camera is used to capture the virtual environment that is displayed on the screen 120 (
This problem is illustrated in
For the environment 1105 to appear realistic when projected onto the display screen 120, the view on the screen would need to appear differently depending on the position of the participant's head in the capture volume 115. That is, the participant 105 would expect the virtual environment to look different as his point of view changes. For example, when the participant 105 is in position “A” (as indicated by reference numeral 1110), his line of sight along line 1115 to the enemy soldier 130 is obscured by the wall 1120. Assuming that the wall 1120 is co-planar with the display screen 120, the dot 1125 shows that the sight line 1115 intersects the front plane of the environment at the wall 1120. By contrast, when the participant 105 moves to position “B” (as indicated by reference numeral 1130), his line of sight 1135 to the enemy soldier 130 is no longer obscured by the wall 1120. Thus, if the modeled environment accurately matches its physical counterpart, the participant could move to look around an obstacle to see if an enemy is hidden behind it.
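The geometry of this example can be expressed as a simple line-of-sight test, sketched below: the head-to-target sight line is intersected with the plane of the shoot wall, and the crossing point is checked against the extent of the co-planar wall. The coordinate convention (the screen lying in the y = 0 plane, with the capture volume at y > 0 and the modeled environment at y < 0) is an assumption for illustration.

```python
def sight_line_crossing(head, target, screen_y=0.0):
    """head, target: (x, y, z) points, with y measured perpendicular to the display screen.
    Returns the (x, z) point where the head-to-target sight line crosses the screen plane."""
    hx, hy, hz = head
    tx, ty, tz = target
    t = (screen_y - hy) / (ty - hy)          # parametric distance along the sight line
    return (hx + t * (tx - hx), hz + t * (tz - hz))

def target_visible(head, target, wall_x_range, wall_z_range):
    """True when the sight line misses a wall segment that is co-planar with the screen
    (e.g., the wall 1120 obscuring the enemy soldier 130 from position A but not B)."""
    x, z = sight_line_crossing(head, target)
    behind_wall = (wall_x_range[0] <= x <= wall_x_range[1]
                   and wall_z_range[0] <= z <= wall_z_range[1])
    return not behind_wall
```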
As shown in
By contrast, application of the principles of the present simulator system enables an accurate and realistic display to be generated and projected by tracking the position of the participant 105 in the capture volume 115. The imaginary camera 1140 is then placed to be coincident with the participant's head so that the captured view of the modeled environment 1105 matches the participant's point of view as he moves through the capture volume 115. This feature is shown in
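A minimal sketch of this head-coupled capture, assuming a hypothetical render-camera object: each frame the imaginary camera is simply relocated to the tracked head position so that the captured view of the modeled environment 1105 follows the participant's point of view.

```python
def update_view_camera(camera, head_pose, screen_center):
    """Relocate the imaginary capture camera to the tracked head position each frame.
    camera: hypothetical render-camera object with a position attribute and look_at();
    head_pose: tracked head position/orientation (e.g., a Pose6DOF as sketched earlier);
    screen_center: point on the display screen the view is framed around."""
    camera.position = (head_pose.x, head_pose.y, head_pose.z)
    # Keep the captured view framed on the plane of the shoot wall; a fuller treatment
    # would also skew the projection frustum toward the physical screen (off-axis projection).
    camera.look_at(screen_center)
```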
In addition to supporting the generation and projection of a virtual environment that is dynamically and continuously captured from the participant's point of view as he moves through the capture volume 115, the present simulator system supports additional features which can add to the accuracy and realism of a given simulation. The virtual environment may also be generated so that rendered elements in the environment are responsive to the participant's position in the capture volume. For example, the avatar of the enemy soldier 130 can be rendered so that the soldier's eyes and aim of his weapon track the participant 105. In this way, the avatar 130 realistically appears to be looking at the participant 105 and the avatar's gaze will dynamically change in response to the participant's motion. This enhanced realism is in contrast to simulations supported by conventional simulators where avatars typically appear to stare into space or in an odd direction when supposedly attempting to look at or aim a weapon at a participant.
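This avatar behavior can be sketched as a per-frame look-at computation toward the participant's tracked head position; the vector arithmetic is generic, and the avatar methods shown are hypothetical.

```python
import math

def aim_direction(avatar_position, participant_head_position):
    """Unit vector from the avatar (in the modeled environment) toward the participant's
    tracked head position, used to drive both gaze direction and weapon aim."""
    dx, dy, dz = (participant_head_position[i] - avatar_position[i] for i in range(3))
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / length, dy / length, dz / length)

def update_avatar(avatar, participant_head_position):
    """Each frame, orient the avatar's gaze and weapon along the direction to the participant."""
    direction = aim_direction(avatar.position, participant_head_position)
    avatar.set_gaze(direction)        # hypothetical avatar API
    avatar.set_weapon_aim(direction)  # hypothetical avatar API
```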
Tracking the position of the weapon 110 (
As shown in
A parallax angle p between the actual trajectory 1305 and the perpendicular trajectory 1310 is thus created. Conventional simulators will typically rely on simple 3D scenarios where elements in the modeled environment 1105 do not extend deeply past the plane of the shoot wall in order to minimize the impact of the parallax. Thus, as shown in
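The parallax angle p can be sketched as the angle between the weapon's tracked aim direction and the screen-perpendicular direction that a conventional simulator would assume; the helpers below are generic, and the screen-normal convention matches the earlier illustrative sketches.

```python
import math

def angle_between(u, v):
    """Angle in radians between two 3D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(a * a for a in v))
    return math.acos(max(-1.0, min(1.0, dot / (norm_u * norm_v))))

def parallax_angle(weapon_aim_vector, screen_normal=(0.0, -1.0, 0.0)):
    """p: angle between the actual trajectory (along the weapon's tracked aim) and the
    perpendicular trajectory assumed by a conventional shoot wall simulator.
    The default screen normal assumes the y = 0 screen-plane convention used above."""
    return angle_between(weapon_aim_vector, screen_normal)
```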
As shown in
A camera module 1510 is utilized to abstract the functionality provided by the video cameras 305 (
A head tracking module 1515 is also included in the simulator system 1505. In this illustrative example, head tracking alone is utilized in order to minimize the resource costs and latency that are typically associated with full body tracking. However, in alternative implementations, full body tracking and motion capture may be utilized. The head tracking module 1515 uses images of the helmet markers captured by the camera module 1510 in order to triangulate the position of the participant's head within the capture volume 115 as a given simulation unfolds and the participant moves throughout the volume.
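Triangulation from multiple calibrated cameras can be sketched, in its simplest two-camera form, as finding the midpoint of the segment of closest approach between the two back-projected marker rays; the formulation below is a standard geometric simplification and is not drawn from the internals of module 1515.

```python
def triangulate_midpoint(p1, d1, p2, d2):
    """Estimate a 3D marker position from two camera rays, each given by an origin p
    (camera center) and a unit direction d (back-projected through the detected marker).
    Returns the midpoint of the closest-approach segment, or None for degenerate geometry."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    w0 = tuple(a - b for a, b in zip(p1, p2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-9:                    # rays nearly parallel: poor triangulation geometry
        return None
    t1 = (b * e - c * d) / denom             # parameter along the first ray
    t2 = (a * e - b * d) / denom             # parameter along the second ray
    q1 = tuple(p + t1 * dd for p, dd in zip(p1, d1))
    q2 = tuple(p + t2 * dd for p, dd in zip(p2, d2))
    return tuple((x + y) / 2.0 for x, y in zip(q1, q2))
```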
Similarly, an object tracking module 1520 is included in the simulator system 1505 which uses images of the weapon markers captured by the camera module 1510 to triangulate the position of the weapon within the capture volume 115 and detect trigger pulls. For both head tracking and object tracking, the position determination is performed substantially in real time to minimize latency as the simulator system generates and renders the virtual environment. Minimization of latency can typically be expected to increase the realism and immersion of the simulation. In some cases, the head tracking and object tracking modules can be combined into a single module as indicated by dashed line 1525 in
The simulator system 1505 further supports the utilization of a virtual environment generation module 1530. This module is responsible for generating a virtual environment responsive to the needs of a given simulation. In addition, module 1530 will generate a virtual environment while correcting for point of view parallax distortion and trajectory parallax, as respectively indicated by reference numerals 1535 and 1540. That is, the virtual environment generation module 1530 will dynamically generate one or more views of a virtual environment that are consistent with the participants' respective and unique points of view. As noted above, up to four unique views may be generated and rendered depending on the configuration of the glasses 1005 (
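Point-of-view parallax correction of this kind is commonly realized with an off-axis (asymmetric-frustum) projection computed from the tracked head position relative to the physical screen; the sketch below follows that general approach under the same assumed screen-plane convention as the earlier examples, and is not a description of the internals of module 1530.

```python
def off_axis_frustum(eye, screen_lo, screen_hi, near, far):
    """Asymmetric view-frustum bounds for a head-coupled display.
    Assumed convention: the display screen lies in the y = 0 plane, x is horizontal,
    z is vertical, and the eye sits at y > 0 in front of the screen.
    eye: tracked head position (x, y, z); screen_lo/screen_hi: (x, z) corners of the screen;
    near/far: clip distances. Returns (left, right, bottom, top, near, far) suitable for
    an off-axis perspective projection (e.g., a glFrustum-style call)."""
    ex, ey, ez = eye
    dist = ey                                   # eye-to-screen distance along the screen normal
    left = (screen_lo[0] - ex) * near / dist
    right = (screen_hi[0] - ex) * near / dist
    bottom = (screen_lo[1] - ez) * near / dist
    top = (screen_hi[1] - ez) * near / dist
    return (left, right, bottom, top, near, far)
```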
A virtual environment rendering module 1545 is utilized in the simulator system 1505 to take the generated virtual environment and pass it off in an appropriate format for projection or display on the display screen 120. As described above, multiple views and/or multiple screens may be utilized as needed to meet the requirements of a particular implementation. Other hardware may be abstracted in a hardware abstraction layer 1550 in some cases in order for the simulator system 1505 to implement the necessary interfaces with various other hardware components that may be needed to implement a given simulation. For example, various other types of peripheral equipment may be supported in a simulation, or interfaces may need to be maintained to support the simulator system 1505 across multiple platforms in a distributed computing arrangement.
The participant's point of view is determined, at block 1620, in response to the head tracking. At block 1625, the gaze direction of one or more avatars 130 in the simulation will be determined based on the location of the participant 105 in the capture volume 115. Similarly, the aim direction of the avatar's weapon will be determined, at block 1630, so that the aim of the weapon will track the motion of the participant and thus appear realistic.
At block 1635, the simulator system 1505 will detect weapon fire (and/or detect other communicated data transmitted over the low-bandwidth communication path described above in the text accompanying
Data descriptive of a given simulation scenario is received, as indicated at block 1645. Such data, for example, may be descriptive of the storyline followed in the simulation, express the actions and reactions of the avatars to the participant's commands and/or actions, and the like. At block 1650, using the captured information from the camera module, the various determinations described in blocks 1625 through 1640, and the received simulation data, the virtual environment will be generated using the participant's point of view, having a realistic avatar gaze and weapon direction, and using the actual trajectory for weapon fire. At block 1655, the generated virtual environment will be rendered by projecting or displaying the appropriate views on the display screen 120. At block 1660, control is returned back to the start and the method 1600 is repeated. The rate at which the method repeats can vary by application; however, the various steps of capturing, determining, generating, and rendering will be performed with sufficient frequency to provide a smooth and seamless simulation.
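The per-frame flow of method 1600 might be organized along the lines of the following sketch, in which "sim" bundles hypothetical module objects analogous to those of the simulator system 1505; the call names, loop structure, and frame rate are illustrative assumptions.

```python
import time

def run_simulation(sim, frame_rate_hz=60):
    """One pass per frame: capture, determine, generate, render, then repeat."""
    period = 1.0 / frame_rate_hz
    while sim.scenario_active():
        frames = sim.camera_module.capture()                    # marker/light-source images
        head_pose = sim.head_tracking.update(frames)            # point of view (block 1620)
        weapon_pose = sim.object_tracking.update(frames)        # weapon position and orientation
        gaze = sim.avatars.update_gaze_and_aim(head_pose)       # avatar gaze and aim (blocks 1625-1630)
        fired = sim.object_tracking.detect_fire(frames)         # weapon fire signaling (block 1635)
        trajectory = sim.ballistics.actual_trajectory(weapon_pose) if fired else None
        scenario = sim.scenario.next_state()                    # scenario data (block 1645)
        scene = sim.environment.generate(head_pose, gaze,
                                         trajectory, scenario)  # participant's point of view (block 1650)
        sim.renderer.render(scene)                              # project/display the views (block 1655)
        time.sleep(period)                                      # repeat (block 1660)
```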
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.