The present invention relates to the field of simulation systems for weapons training. More specifically, the present invention relates to scenario authoring and provision in a simulation system.
Due to current world events, there is an urgent need for highly effective law enforcement, security, and military training. Training involves practicing marksmanship skills with lethal and/or non-lethal weapons. Additionally, training involves the development of decision-making skills in situations that are stressful and potentially dangerous. Indeed, perhaps the greatest challenges faced by a trainee are when to use force and how much force to use. If an officer is unprepared to make rapid decisions under the various threats he or she faces, injury to the officer or citizens may result.
Although scenario training is essential for preparing a trainee to react safely with appropriate force and judgment, such training under various real-life situations is a difficult and costly endeavor. Live-fire weapons training may be conducted at firing ranges, but it is inherently dangerous, tightly regulated for safety, and costly in terms of training ammunition, and firing ranges may not be readily available in all regions. Moreover, live-fire weapons cannot be safely utilized for training under various real-life situations.
One technique that has been in use for many years is the utilization of simulation systems to conduct training exercises. Simulation provides a cost effective means of teaching initial weapon handling skills and some decision-making skills, and provides training in real-life situations in which live-fire may be undesirable due to safety or other restrictions.
A conventional simulation system includes a single screen projection system to simulate reality. A trainee views the single screen with video projected thereon, and must decide whether to shoot or not to shoot at the subject. The weapon utilized in a simulation system typically employs a laser beam or light energy to simulate firearm operation and to indicate simulated projectile impact locations on a target.
Single screen simulators utilize technology which restricts realism in tactical training situations and restricts the ability for thorough performance measurements. For example, in reality, lethal threats can come from any direction or from multiple directions. Unfortunately, a conventional single screen simulator does not expand or stimulate a trainee's awareness to these multi-directional threats because the trainee is compelled to focus on a situation directly in front of the trainee, as presented on the single screen. Accordingly, many instructors feel that the industry is encouraging “tunnel vision” by having the trainees focus on an 8-10 foot screen directly in front of them.
One simulation system proposes the use of one screen directly in front of the trainee and a second screen directly behind the trainee. This dual screen simulation system simulates the “feel” of multi-directional threats. However, the trainee is not provided with peripheral stimulation in such a dual screen simulation system. Peripheral vision is used for detecting objects and movement outside of the direct line of vision. Accordingly, peripheral vision is highly useful for avoiding threats or situations from the side. The front screen/rear screen simulation system also suffers from the “tunnel vision” problem mentioned above. That is, a trainee does not employ his or her peripheral vision when assessing and reacting to a simulated real-life situation.
In addition, prior art simulation systems utilize projection systems for presenting prerecorded video, and detection cameras for tracking shots fired, that operate at standard video rates and resolution based on the National Television System Committee (NTSC) analog television standard. Training scenarios based on the NTSC analog television standard suffer from poor realism due to low resolution images that are expanded to fit the large screen of the simulator system. In addition, detection cameras based on NTSC standards suffer from poor tracking accuracy, again due to low resolution.
While effective training can increase the potential for officer safety and can teach better decision-making skills for management of use of force against others, law enforcement, security, and military training managers must devote more and more of their limited resources to equipment purchases and costly training programs. Consequently, the need to provide cost effective, yet highly realistic, simulation systems for situational response training in austere budget times has presented additional challenges to the simulation system community.
Accordingly, what is needed is a simulation system that provides realistic, multi-directional threats for situational response training. In addition, what is needed is a simulation system that includes the ability for high accuracy trainee performance measurements. Moreover, the simulation system should support a number of configurations and should be cost effective.
It is an advantage of the present invention that a simulation system is provided for situational response training.
It is another advantage of the present invention that a simulation system is provided in which a trainee can face multiple risks from different directions, thus encouraging teamwork and reinforcing the use of appropriate tactics.
Another advantage of the present invention is that a simulation system is provided having realistic scenarios in which a trainee may practice observation techniques, practice time-critical judgment and target identification, and improve decision-making skills.
Yet another advantage of the present invention is that a cost-effective simulation system is provided that can be configured to enable situational response training, marksmanship training, and/or can be utilized for weapons qualification testing.
The above and other advantages of the present invention are carried out in one form by a simulation system. The simulation system includes a first screen for displaying a first view of a scenario, and a second screen for displaying a second view of the scenario. The first and second views of the scenario occur at a same instant, and the scenario is a visually presented situation. The simulation system further includes a device for selective actuation toward a target within the scenario displayed on the first and second screens, a detection subsystem for detecting an actuation of the device toward the first and second screens, and a processor in communication with the detection subsystem for receiving information associated with the actuation of the device and processing the received information to evaluate user response to the situation.
The above and other advantages of the present invention are carried out in another form by a method of training a participant utilizing a simulation system, the participant being enabled to selectively actuate a device toward a target. The method calls for displaying a first view of a scenario on a first screen of the simulation system and displaying a second view of the scenario on a second screen of the simulation system. The first and second views of the scenario occur at a same instant, the scenario is prerecorded video of a situation, and the first and second views are adjacent portions of the prerecorded video. The method further calls for detecting an actuation of the device toward a target within the scenario displayed on the first and second screens, and evaluating user response to the situation in response to the actuation of the device.
A more complete understanding of the present invention may be derived by referring to the detailed description and claims when considered in connection with the Figures, wherein like reference numbers refer to similar items throughout the Figures, and:
a-d show an illustration of a single frame of an exemplary video clip undergoing video filming and editing;
Each of multiple screens 22 has a rear projection system 28 associated therewith. Rear projection system 28 is operable from, and the actions of trainees 26 may be monitored from, a workstation 30 located remote from participation location 24. Workstation 30 is illustrated as being positioned proximate screens 22. However, it should be understood that workstation 30 need not be proximate screens 22, but may instead be located more distantly, for example, in another room. When workstation 30 is located in another room, bi-directional audio may be provided for communication between trainees 26 and trainers located at workstation 30. In addition, video monitoring of participation location 24 may be provided to the trainer located at workstation 30.
Full surround simulation system 20 includes a total of six screens 22 arranged such that an angle 27 formed between corresponding faces 29 of screens 22 is approximately one hundred and twenty degrees. As such, the six screens 22 are arranged in a hexagonal pattern. In addition, each of screens 22 may be approximately ten feet wide by seven and a half feet high. Of course, those skilled in the art will recognize that other sizes of screens 22 may be provided. For example, a twelve foot wide by six foot nine inch high screen may be utilized for high definition formatted video. Thus, the configuration of simulation system 20 provides a multi-directional simulated environment in which a situation, or event, is unfolding. Although screens 22 are shown as being generally flat, the present invention may be adapted to include screens 22 that are curved. In such a configuration, screens 22 would form a generally circular pattern rather than the illustrated hexagonal pattern.
Full surround simulation system 20 provides a visually presented situation onto each of screens 22 so that trainees 26 in participation location 24 are fully immersed in the situation. In such a configuration, trainees 26 can train to respond to peripheral visual cues, multi-directional auditory cues, and the like. In a preferred embodiment, the visually presented situation is full motion, pre-recorded video. However, it should be understood that other techniques may be employed, such as video overlay, computer generated imagery, and the like.
The situation presented by simulation system 20 is pertinent to the type of training and the trainees 26 participating in the training experience. Trainees 26 may be law enforcement, security, military personnel, and the like. Accordingly, training scenarios projected via rear projection system 28 onto associated screens 22 correspond to real life situations in which trainees 26 might find themselves. For example, law enforcement scenarios could include response to shots fired at a facility, domestic disputes, hostage situations, and so forth. Security scenarios might include action in a crowded airport departure/arrival terminal, the jet way, or in an aircraft. Military scenarios could include training for a pending mission, a combat situation, an ambush, and so forth.
Trainees 26 are provided with a weapon 31. Weapon 31 may be implemented by any firearm (e.g., hand-gun, rifle, shotgun, etc.) and/or a non-lethal weapon (e.g., pepper spray, tear gas, stun gun, etc.) that may be utilized by trainees 26 in the course of duty. However, for purposes of the simulation, weapon 31 is equipped with a laser insert instead of actual ammunition. Trainees 26 actuate weapon 31 to selectively project a laser beam, represented by an arrow 33, toward any of screens 22 in response to the situation presented by simulation system 20. In a preferred embodiment, weapon 31 is a laser device that projects infrared (IR) light, although a visible red laser device may also be used. Alternatively, other non-live fire weaponry and/or live-fire weaponry may be employed.
Referring to
Each rear projection system 28 includes a projector 38 having a video input 40 in communication with a video output 42 of its respective projection controller 36, and a sound device, i.e., a speaker 44, having an audio input 46 in communication with an audio output 48 of its respective projection controller 36. Each rear projection system 28 further includes a detector 50, in communication with tracking processor 34 via a high speed serial bus 51. Thus, the collection of detectors 50 defines a detection subsystem of simulation system 20. Projector 38 and detector 50 face a mirror 52 of rear projection system 28.
In general, simulation controller 32 may include a scenario pointer database 54 that is an index to a number of scenarios (discussed below) that are prerecorded full motion video of various situations that are to be presented to trainees 26. In addition, each of projection controllers 36 may include a scenario library 56 pertinent to their location within simulation system 20. Each scenario library 56 includes a portion of the video and audio to be presented via the associated one of projectors 38 and speakers 44.
An operator at workstation 30 selects one of the scenarios to present to trainees 26 and simulation controller 32 accesses scenario pointer database 54 to index to the appropriate video identifiers (discussed below) that correspond to the scenario to be presented. Simulation controller 32 then commands each of projection controllers 36 to concurrently present corresponding video, represented by an arrow 58, and any associated audio, represented by arced lines 60.
Video 58 is projected toward a reflective surface 62 of mirror 52 where video 58 is thus reflected onto screen 22 in accordance with conventional rear projection methodology. Depending upon the scenario, trainee 26 may elect to shoot his or her weapon 31, i.e. project laser beam 33, toward an intended target within the scenario. An impact location (discussed below) of laser beam 33 is detected by detector 50 via reflective surface 62 of mirror 52 when laser beam 33 is projected onto screen 22. Information regarding the impact location is subsequently communicated to tracking processor 34 to evaluate trainee response to the presented scenario (discussed below). Optionally, trainee response may then be concatenated into a report 64.
Simulation controller 32 is a conventional computing system that includes, for example, input devices (keyboard, mouse, etc.), output devices (monitor, printers, etc.), a data reader, memory, programs stored in memory, and so forth. Simulation controller 32 and projection controllers 36 operate under a primary/secondary computer networking communication protocol in which simulation controller 32 (the primary device) controls projection controllers (the secondary devices).
Simulation system 20, illustrated in
In a preferred embodiment, each of projectors 38 is capable of playing high definition video. The term “high definition” refers to being or relating to a television system that has twice as many scan lines per frame as a conventional system, a proportionally sharper image, and a wide-screen format. The high-definition format uses a 16:9 aspect ratio (an image's width divided by its height), although the 4:3 aspect ratio of conventional television may also be used. The high resolution images (1024×768 or 1280×720) allow much more detail to be shown. Simulator system 20 places trainees 26 close to screens 22, so that trainees 26 can see more detail. Consequently, the high resolution video images are advantageously utilized to provide more realistic imagery to trainees 26. Although the present invention is described in terms of its use with known high definition video formats, the present invention may further be adapted for future higher resolution video formats.
In a preferred embodiment, each of detectors 50 is an Institute of Electrical and Electronics Engineers (IEEE) 1394-compliant digital video camera in communication with tracking processor 34 via high speed serial bus 51. IEEE 1394 is a digital video serial bus interface standard that offers high-speed communications and isochronous real-time data services. An IEEE 1394 system is advantageously used in place of the more common universal serial bus (USB) due to its faster speed. However, those skilled in the art will recognize that existing and upcoming standards that offer high-speed communications, such as USB 2.0, may alternatively be employed.
Each of detectors 50 further includes an infrared (IR) filter 66 removably covering a lens 68 of detector 50. IR filter 66 may be hingedly affixed to detector 50 or may be pivotally affixed to detector 50. IR filter 66 covers lens 68 when simulator system 20 is functioning so as to accurately detect the impact location of laser beam 33 (
A first projector 38′ is situated at a second end 76 of frame structure 70 at a distance, d, from first reflective surface 62′ of first mirror 52′. First projector 38′ is preferably equipped with an adjustment mechanism which can be employed to adjust first projector 38′ so that a center of a first view 78 of the projected video 58 (
The utilization of first rear projection system 28′ in simulation system 20 (
The relationship of components on frame structure 70 simplifies system configuration and calibration, and makes adjusting of first projector 38′ simpler. As shown, frame structure 70 further includes casters 82 mounted to a bottom thereof. Through the use of casters 82, simulation system 20 (
In firing range configuration 84, screens 22 are arranged such that corresponding viewing faces 86 of screens 22 are aligned to be substantially coplanar. Additionally, rear projection systems 28 are readily repositioned behind the aligned screens 22 via casters 82 (
Exemplary scenario pointer database 54 includes four exemplary scenarios 86, labeled “1”, “2”, “3”, and “4”, and referenced in a scenario identifier field 87. Each of scenarios 86 is pre-recorded video 58 corresponding to a real life situation in which trainees 26 might find themselves, as discussed above. In addition, each of scenarios 86 is split into adjacent portions, i.e., adjacent views 88, referenced in a video index identifier field 90, and assigned to particular projection controllers 36, referenced in a projection controller identifier field 92. For example, a first projection controller 36′ is assigned a first view 88′, identified in video index identifier field 90 by the label 1-1. Similarly, a second projection controller 36″ is assigned a second view 88″, identified in video index identifier field 90 by 1-2.
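By way of illustration only, the following sketch (in Python, which is not required by the invention) shows one way a scenario pointer database of this kind could be represented and used to dispatch views to projection controllers. The dictionary layout, the controller identifiers such as "PC-1", and the view labels beyond 1-1 and 1-2 are hypothetical assumptions, not taken from the text.

```python
# A minimal sketch (not the patented implementation) of a scenario pointer
# database.  Field names and controller identifiers are hypothetical.

SCENARIO_POINTER_DB = {
    # scenario identifier: list of (view identifier, projection controller) pairs
    "1": [("1-1", "PC-1"), ("1-2", "PC-2"), ("1-3", "PC-3"),
          ("1-4", "PC-4"), ("1-5", "PC-5"), ("1-6", "PC-6")],
    "2": [("2-1", "PC-1"), ("2-2", "PC-2"), ("2-3", "PC-3"),
          ("2-4", "PC-4"), ("2-5", "PC-5"), ("2-6", "PC-6")],
}

def views_for_scenario(scenario_id):
    """Return the (view_id, controller_id) assignments for one scenario."""
    return SCENARIO_POINTER_DB[scenario_id]

def dispatch_scenario(scenario_id):
    """Look up a scenario and tell each projection controller which view to cue."""
    for view_id, controller_id in views_for_scenario(scenario_id):
        print(f"controller {controller_id}: load and cue view {view_id}")

dispatch_scenario("1")
```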
In a preferred embodiment, pre-recorded video 58 may be readily filmed utilizing multiple high-definition format cameras with lenses outwardly directed from the same location, or a compound motion picture camera, in order to achieve a 360-degree field-of-view. Post-production processing entails stitching, or seaming, the individual views to form a panoramic view. The panoramic view is subsequently split into adjacent views 88 that are presented, via rear projection systems 28 (
The video is desirably split so that the primary subject or subjects of interest in the video are not split over adjacent screens 22. The splitting of video into adjacent views 88 for presentation on adjacent screens 22 need not be a one-to-one correlation. For example, during post-production processing, a stitched panoramic video having a 270-degree field-of-view may be projected onto five screens to yield a 300-degree field-of-view.
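As a hedged illustration of the simplest case, in which a stitched panorama is divided into equal-width adjacent views with a one-to-one screen correspondence, the following sketch slices a panoramic frame into per-screen views. The array shapes and the use of NumPy are assumptions made for illustration only.

```python
import numpy as np

def split_panorama(frame, num_screens):
    """Split one panoramic frame (H x W x 3) into equal-width adjacent views,
    one per screen, ordered left to right (illustrative equal-split case only)."""
    height, width, _ = frame.shape
    view_width = width // num_screens
    return [frame[:, i * view_width:(i + 1) * view_width, :]
            for i in range(num_screens)]

# Example: a stitched six-screen panorama, 6 x 1280 pixels wide and 720 high.
panorama = np.zeros((720, 1280 * 6, 3), dtype=np.uint8)
views = split_panorama(panorama, num_screens=6)
print([v.shape for v in views])   # six views, each 720 x 1280 x 3
```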
Audio 60 may simply be recorded at the time of video production. During post-production processing, particular portions of the audio are assigned to particular slices of the video so that audio relevant to the view is provided. For example, audio 60 (
Although one video and audio production technique is described above that cost-effectively yields a high resolution emulation of a real-life situation, it should be apparent that other video and audio production techniques may be employed. For example, the pre-recorded video may be filmed utilizing a digital camera system having a lens system that can record 360-degree video. Post-production processing then merely entails splitting the 360-degree video into adjacent views to be presented on adjacent screens. Similarly, audio may be produced utilizing one of several surround sound techniques known to those skilled in the art.
Simulation system 20 (
Referring to
In addition, second scenario 86″ shows that following initiation of first subscenario 94′, another branching decision 98 may be required. When no branching is to occur at branching decision 98, first subscenario 94′ continues. Alternatively, when branching is to occur at branching decision 98, a second subscenario 94″, labeled 2C, is presented. Following the completion of second scenario 86″, first subscenario 94′, or second subscenario 94″, video playback process 93 for second scenario 86″ is finished.
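A minimal sketch of this branching playback is shown below. The clip labels loosely follow the example above ("2" for second scenario 86″ and "2C" for second subscenario 94″); the intermediate label "2B" for first subscenario 94′, the branch-table format, and the branch_now callback standing in for the detector decision are all illustrative assumptions.

```python
# A minimal sketch of branching playback, assuming a branch table keyed by
# the clip currently playing.  The "branch_now" callback is a hypothetical
# stand-in for the detector/tracking-processor decision.

BRANCH_TABLE = {
    # current clip: clip to switch to when the branching decision fires
    "2":  "2B",    # first branching decision
    "2B": "2C",    # second branching decision
}

def play_scenario(start_clip, branch_now, clip_length_s=10, step_s=1):
    """Play clips second by second, branching whenever branch_now() says so."""
    clip, t = start_clip, 0
    while t < clip_length_s:
        print(f"t={t:2d}s  playing clip {clip}")
        if clip in BRANCH_TABLE and branch_now(clip, t):
            clip, t = BRANCH_TABLE[clip], 0      # jump to the subscenario
            continue
        t += step_s

# Example: the trainee's action forces a branch three seconds into each clip.
play_scenario("2", branch_now=lambda clip, t: t == 3)
```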
An exemplary scenario 86 in which video branching might occur is as follows: detectors 50 (
Referring back to
The present invention contemplates the provision of custom authoring capability of scenarios 86 to the training organization. To that end, scenario creation software permits a scenario developer to construct situations that can be displayed on screens 22 from “stock” footage without the demands of extensive camera work. In a preferred embodiment, the scenario creation software employs a technique known as compositing. Compositing is the post-production combination of two or more video/film/digital clips into a single image.
In compositing, two images (or clips) are combined in one of several ways using a mask. The most common way is to place one image (the foreground) over another (the background). Where the mask indicates transparency, the background image will show through the foreground. Blue/green screening, also known as chroma keying is a type of compositing where the mask is calculated from the foreground image. Where the image is blue (or green for green screen), the mask is considered to be transparent. This technique is useful when shooting film and video, as a blue or green screen can be placed behind the object being shot and some other image then inserted in that space later.
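The following is a minimal sketch of green-screen chroma keying as described above, assuming the foreground and background are available as RGB arrays; the green-dominance threshold is an illustrative value, not one taken from the text.

```python
import numpy as np

def chroma_key_composite(foreground, background, green_threshold=1.3):
    """Composite a green-screen foreground over a background frame.
    Pixels whose green channel dominates red and blue are treated as
    transparent.  The threshold is an illustrative assumption."""
    fg = foreground.astype(np.float32)
    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
    transparent = (g > green_threshold * r) & (g > green_threshold * b)
    mask = (~transparent)[..., None]          # 1 where the foreground is opaque
    return np.where(mask, foreground, background)

# Example with synthetic frames: a solid green foreground is keyed out entirely.
fg = np.zeros((480, 640, 3), dtype=np.uint8); fg[..., 1] = 255
bg = np.full((480, 640, 3), 128, dtype=np.uint8)
out = chroma_key_composite(fg, bg)
print(np.array_equal(out, bg))   # True: the green screen is fully transparent
```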
The scenario creation software provides the scenario developer with a library of background still and/or motion images. These background images are desirably panoramic images, so that one large picture is continued from one view on one of screens 22 (
The scenario creation software then enables the scenario developer to display the background image with various foreground clips to form the scenario. In addition, the scenario developer may optionally determine the “logic” behind when and where the clips may appear. For example, the scenario developer could determine that foreground image “A” is to appear at a predetermined and/or random time. In addition, the scenario developer may add “hit zones” to the clips. These “hit zones” are areas where the clip would branch due to interaction by the user. The scenario developer can instruct the scenario to branch to clip “C” if a “hit zone” was activated on clip “B”.
Through the use of scenario creation software, the scenario developer is enabled to add, modify, and subtract video clips, still images, and/or audio clips to or from the scenario that they are creating. The scenario developer may then be able to preview and test their scenario during the scenario creation process. Once the scenario developer is satisfied with the content, the scenario creation software can create the files needed by simulation system 20 (
Trainee 26 responded to perceived aggressive behavior exhibited by subject 102 with the force that he or she deemed to be reasonably necessary during the course of the situation unfolding within scenario 86. As discussed previously, detector 50 (
The 180-degree field of view enables trainee 26 to utilize peripheral visual and auditory cues. However, space and cost savings are realized relative to full surround simulation system 20. Space savings are realized because the overall footprint of half surround simulation system 106 is approximately half that of full surround simulation system 20, and cost savings are realized by utilizing a smaller number of components.
System 108 is further shown as including a remote debrief station 111. Remote debrief station 111 may be located in a different room, as represented by dashed lines 113. Station 111 is in communication with workstation 30, and more particularly with tracking processor 34 (
Although each of the simulation systems of
Referring to
Training process 112 presents one of scenarios 86 (
Training process 112 begins with a task 114. At task 114, an operator calibrates simulation system 20. As such, calibration task 114 is a preliminary activity that can occur prior to positioning trainee 26 within participation location 24 of simulation system 20. Calibration task 114 is employed to calibrate each of detectors 50 with their associated projectors 38. In addition, calibration task 114 may be employed to calibrate, i.e., zero, weapon 31 relative to projectors 38.
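One common way to implement such detector-to-projector calibration is to estimate a homography that maps detector-camera pixel coordinates to projected-image coordinates from a few reference marks. The sketch below is an assumption about how this could be done, not a description of the specific calibration used by simulation system 20; the corner coordinates and function names are hypothetical.

```python
import numpy as np

def homography_from_corners(camera_pts, screen_pts):
    """Estimate the 3x3 homography mapping detector-camera pixels to
    screen/video coordinates from four corner correspondences (DLT)."""
    A = []
    for (x, y), (u, v) in zip(camera_pts, screen_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def camera_to_screen(H, x, y):
    """Map a detected laser spot from camera pixels to screen coordinates."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Example: four projected calibration marks as seen by the detector camera
# (coordinates are illustrative only).
camera_corners = [(102, 78), (598, 85), (605, 430), (95, 441)]
screen_corners = [(0, 0), (1280, 0), (1280, 720), (0, 720)]
H = homography_from_corners(camera_corners, screen_corners)
print(camera_to_screen(H, 350, 260))   # roughly mid-screen
```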
Referring to
To that end, at calibration task 114, IR filter 66 (
With reference back to
In conjunction with task 118, a query task 120 determines whether laser beam 33 is detected on one of screens 22. That is, at query task 120, each of detectors 50 monitors for laser beam 33 projected on one of screens 22 in response to actuation of weapon 31. When one of detectors 50 detects laser beam 33, this information is communicated to tracking processor 34 (
When laser beam 33 is detected at query task 120, process flow proceeds to a task 122. At task 122, tracking processor 34 determines coordinates describing impact location 100 (
Following task 122, or alternatively, when laser beam 33 is not detected at query task 120, process flow proceeds to query task 124. Query task 124 determines whether to branch to one of subscenarios 94 (
Process 112 proceeds to a task 126 when a determination is made at query task 124 to branch to one of subscenarios 94. At task 126, simulation controller 32 commands projection controllers 36 (
When query task 124 determines not to branch to one of subscenarios 94, process 112 continues with a query task 128. Query task 128 determines whether playback of scenario 86 is complete. When playback of scenario 86 is not complete, program control loops back to query task 120 to continue monitoring for laser beam 33. Thus, training process 112 allows for the capability of detecting multiple shots fired from weapon 31. Alternatively, when playback of scenario 86 is complete, process control proceeds to a query task 130 (discussed below).
Referring back to task 126, a query task 132 is performed in conjunction with task 126. Query task 132 determines whether laser beam 33 is detected on one of screens 22 in response to the presentation of subscenario 94. When one of detectors 50 detects laser beam 33, this information is communicated to tracking processor 34 (
When laser beam 33 is detected at query task 132, process flow proceeds to a task 134. At task 134, tracking processor 34 determines coordinates describing impact location 100 (
Following task 134, or alternatively, when laser beam 33 is not detected at query task 132, process flow proceeds to query task 136. Query task 136 determines whether to branch to another one of subscenarios 94 (
Process 112 loops back to task 126 when a determination is made at query task 136 to branch to another one of subscenarios 94. The next one of subscenarios 94 is subsequently displayed, and detectors 50 continue to monitor for laser beam 33. However, when query task 136 determines not to branch to another one of subscenarios 94, process 112 continues with a query task 138.
Query task 138 determines whether playback of subscenario 94 is complete. When playback of subscenario 94 is incomplete, program control loops back to query task 132 to continue monitoring for laser beam 33. Alternatively, when playback of subscenario 94 is complete, process control proceeds to query task 130.
Following completion of playback of either of scenario 86, determined at query task 128, or completion of playback of subscenario 94, determined at query task 138, query task 130 determines whether report 64 (
At task 140, report 64 is provided. In an exemplary embodiment, tracking processor 34 (
Following task 140, training process 112 exits. Of course, it should be apparent that training process 112 can be optionally repeated utilizing the same one of scenarios 86 or another one of scenarios 86.
Training process 112 describes methodology associated with situational response training for honing a trainee's decision-making skills in situations that are stressful and potentially dangerous. Of course, as discussed above, a comprehensive training program may also encompass marksmanship training and/or weapons qualification testing. Full surround simulation system 20 may be configured for marksmanship training and weapons qualification testing, as discussed in connection with
Targets 146 presented on first screen 22′ via one of projectors 38 (not shown) are proportionately correct and sized to fit within small viewing area 142. Thus, the size of targets 146 may be reduced by fifty percent relative to their appearance when zoomed out. As shown, there may be multiple targets 146 presented on first screen 22′. Additional information pertinent to qualification testing may also be provided on first screen 22′. This additional information may include, for example, distance to the target (for example, 75 meters), wind speed (for example, 5 mph), and so forth. In addition, an operator may optionally enter, via workstation 30, information for use by a software ballistic calculator to compute, for example, the effects of wind, barometric pressure, altitude, bullet characteristics, and so forth, on the location of a “shot” fired toward targets 146.
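Purely as an illustrative assumption of what such a ballistic calculator might compute, the following sketch uses a heavily simplified point-mass model (exponential velocity decay, gravity drop, and the classic lag-time wind drift); the drag constant and the model itself are not taken from the text.

```python
import math

def simple_ballistic_offsets(range_m, muzzle_velocity, crosswind_mps,
                             decay_per_m=7e-4, g=9.81):
    """Very simplified point-mass sketch.  Velocity is assumed to decay
    exponentially with distance; decay_per_m is an illustrative drag
    constant, not a value from the text.  Returns (drop_m, drift_m)."""
    k, v0, R = decay_per_m, muzzle_velocity, range_m
    tof = (math.exp(k * R) - 1.0) / (k * v0)       # time of flight with drag
    tof_vacuum = R / v0                            # time of flight, no drag
    drop = 0.5 * g * tof ** 2                      # gravity drop
    drift = crosswind_mps * (tof - tof_vacuum)     # lag-time wind drift
    return drop, drift

# Example: 75 m target, 360 m/s round, 5 mph (about 2.2 m/s) crosswind.
drop, drift = simple_ballistic_offsets(75.0, 360.0, 2.2)
print(f"drop ~ {drop*100:.1f} cm, wind drift ~ {drift*100:.1f} cm")
```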
Report 64 (
In contrast to the aforementioned simulation systems, simulation system 150 utilizes a non-laser-based weapon 152. Like weapon 31 (
In a preferred embodiment, tracking markers 154 are reflective markers coupled to weapon 152 that are detectable by tracking cameras 156. Thus, tracking cameras 156 can continuously track the movement of weapon 152. Continuous tracking of weapon 152 provides ready “aim trace” where the position of weapon 152 (or even trainee 26) can be monitored and then replayed during a debrief. Reflective tracking markers 154 require no power, and tracking cameras 156 can track movement of weapon 152 in three dimensions, as opposed to two dimensions for projected laser beam tracking. In addition, reflective tracking is not affected by metal objects in close proximity, and reflective tracking operates at a very high update rate.
Accurate reflective tracking calls for a minimum of two reflective markers 154 per weapon 152 and at least three tracking cameras 156, although four to six tracking cameras 156 are preferred. Each of tracking cameras 156 emits light (often infrared light) directly next to the lens of tracking camera 156. Reflective tracking markers 154 then reflect the light back to tracking cameras 156. A tracking processor (not shown) at workstation 30 then performs various calculations and combines each view from tracking cameras 156 to create a highly accurate three-dimensional position for weapon 152. Of course, as known to those skilled in the art, a calibration process is required for both tracking cameras 156 and weapon 152, and if any of tracking cameras 156 are moved or bumped, simulation system 150 should be recalibrated.
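One standard way such a tracking processor could combine camera views is linear (DLT) triangulation from calibrated projection matrices. The sketch below is an assumption made for illustration; the camera intrinsics, geometry, and function names are hypothetical.

```python
import numpy as np

def triangulate_marker(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one reflective marker from two calibrated
    tracking cameras.  P1 and P2 are 3x4 projection matrices; uv1 and uv2 are
    the marker's pixel coordinates in each view.  Returns the 3-D point."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.vstack([u1 * P1[2] - P1[0],
                   v1 * P1[2] - P1[1],
                   u2 * P2[2] - P2[0],
                   v2 * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Example with two hypothetical calibrated cameras one metre apart.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                # camera at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])    # camera 1 m to the right
point = np.array([0.3, 0.1, 4.0, 1.0])                           # true marker position
uv1 = (P1 @ point)[:2] / (P1 @ point)[2]
uv2 = (P2 @ point)[:2] / (P2 @ point)[2]
print(triangulate_marker(P1, P2, uv1, uv2))                      # ~ [0.3, 0.1, 4.0]
```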
Weapon 152 may be a pistol, for example, loaded with blank rounds. Actuation of weapon 152 is thus detectable by tracking cameras 156 as a sudden movement of tracking markers 154 caused by the recoil of weapon 152 in a direction opposite from the direction of the “shot” fired, as signified by a bi-directional arrow 158. By using such a technique, multiple weapons 152 can be tracked in participation location 24, and the position of weapons 152, as well as the projection of where a “shot” fired would go, can be calculated with high accuracy. Additional markers 154 may optionally be coupled to trainee 26, for example, on the head region to track trainee 26 movement and to correlate the movement of trainee 26 with the presented scenario.
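A minimal sketch of recoil-based shot detection of this kind follows, assuming the tracking processor supplies a time series of three-dimensional marker positions; the speed threshold and update rate are illustrative assumptions.

```python
import numpy as np

def detect_shots(marker_positions, fps=240.0, speed_threshold=3.0):
    """Flag frames where the tracked marker's speed jumps past a threshold,
    treated here as the recoil signature of a fired 'shot'.  The 3 m/s
    threshold and 240 Hz update rate are illustrative assumptions."""
    positions = np.asarray(marker_positions, dtype=float)
    velocities = np.diff(positions, axis=0) * fps       # m/s between frames
    speeds = np.linalg.norm(velocities, axis=1)
    return np.flatnonzero(speeds > speed_threshold)

# Example: a marker that is nearly still, then snaps rearward at frame 3.
track = [[0.0, 1.2, 0.5], [0.001, 1.2, 0.5], [0.001, 1.2, 0.5],
         [0.03, 1.21, 0.5], [0.032, 1.21, 0.5]]
print(detect_shots(track))   # -> [2] (the jump between frames 2 and 3)
```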
If weapon 152 is one that does not typically recoil when actuated, weapon 152 could further be configured to transmit a signal, via a wired or wireless link, indicating actuation of weapon 152. Alternatively, a weapon may be adapted to include both a laser insert and tracking markers, both of which may be employed to detect actuation of the weapon.
Traditional training authoring software for instructional use-of-force training and military simulation can provide three-dimensional components. That is, conventional authoring software enables the manipulation of three-dimensional geometry that represents, for example, human beings. However, due to current technological limitations, computer-generated human characters lack realism in both look and movement, especially in real-time applications. If a trainee believes they are shooting a non-person, rather than an actual person, they may be more likely to use deadly force, even when deadly force is unwarranted. Consequently, a trainee having trained with video game-like “cartoon” characters, may overreact when faced with minimal or non-threats. Similarly, the trainee may be less effective against real threats.
Other current training approaches utilize interactive full-frame video. This type of video can provide very realistic human look and movement, at least on single screen applications. However, simulations based on full-frame video have limitations with respect to branching because the producers of such content must film every possible branch that may be needed during the simulation. In a practical setting, this means that training courseware becomes increasingly difficult to film as additional threats (i.e., characters) are added. The usual practice is to set up a branching point within the video, then further down the timeline, set up another branching point. This effectively limits the number of characters “on-screen” at any one time to usually a maximum of one or two. Moreover, such video has limited ability for reuse since the actions of the actors are not independent from the background. For video-based applications within the multi-screen simulation systems described above, these limitations are unacceptable.
As discussed in detail below, the scenario creation code permits a scenario developer to construct situations that can be displayed on screens 22 (
Computing system 200 includes a processor 204 on which the methods according to the invention can be practiced. Processor 204 is in communication with a data input 206, a display 208, and a memory 210 for storing at least one scenario 211 (discussed below) generated in response to the execution of scenario provision process 202. These elements are interconnected by a bus structure 212.
Data input 206 can encompass a keyboard, mouse, pointing device, and the like for user-provided input to processor 204. Display 208 provides output from processor 204 in response to execution of scenario provision process 202. Computing system 200 can also include network connections, modems, or other devices used for communications with other computer systems or devices.
Computing system 200 further includes a computer-readable storage medium 214. Computer-readable storage medium 214 may be a magnetic disc, optical disc, or any other volatile or non-volatile mass storage system readable by processor 204. Scenario provision process 202 is executable code recorded on computer-readable storage medium 214 for instructing processor 204 to create scenario 211 for interactive use in a scenario playback system for visualization and interactive use by trainees 26 (
Scenario provision process 202 begins with a task 216. At task 216, process 202 is initiated. Initiation of process 202 occurs by conventional program start-up techniques and yields the presentation of a main window on display 208 (
Referring to
Referring to
Referring to
Background images 234 may be chosen from those provided within list 233 stored in database 203 (
The scenario author may utilize a conventional pointer 248 to point to one of background images 234. A short description 250, in the form of text and/or a thumbnail image, may optionally be presented at the bottom of library window 224 to assist the scenario author in his or her choice of one of background images 234. Once the scenario author has chosen one of background images 234, the scenario author can utilize a conventional drag-and-drop technique by clicking on one of background images 234 and dragging it into scenario layout window 222 (
Referring to
Referring back to scenario provision process 202 (
Referring to
Actors 266 may be chosen from those provided within list 264 stored in database 203 (
At task 260 of process 202, the scenario author may utilize pointer 248 to point to one of actors 266, for example a first actor 266′, labeled “Offender 1”. A short description 270, in the form of text and/or a thumbnail image, may optionally be presented at the bottom of library window 224 to assist the scenario author in his or her selection of one of actors 266.
With reference back to process 202 (
Referring to
In accordance with a preferred embodiment of the present invention, each of behaviors 278 within list 276 is the aggregate of actions and/or movements made by an object irrespective of the situation. Behaviors 278 within list 276 are not linked with particular actors 266 (
List 264 (
Referring now to
Although the above description indicates the selection of one of actors 266 and the subsequent assignment of one of behaviors 278 to the selected actor 266, it should be understood that the present invention enables the opposite occurrence. For example, the scenario author may select one of behaviors 278 from list 276. In response, a drop-down menu may appear that includes a subset of actors 266 from list 264 (
With reference back to scenario provision process 202 (
In a preferred embodiment, the scenario author can utilize a conventional drag-and-drop technique by clicking on first actor 266′ and dragging it into scenario layout window 222. By utilizing the drag-and-drop technique, the scenario author can determine a location within first background image 234′ in which the author wishes first actor 266′ to appear. Those skilled in the art will recognize that other conventional techniques, rather than drag-and-drop, may be employed for choosing one of actors 266 and placing it within scenario layout window 222.
In addition, the scenario author can resize first actor 266′ relative to first background image 234′ to characterize a distance of first actor 266′ from trainee 26 (
Following combining task 288, scenario provision process 202 proceeds to a query task 290. At query task 290, the scenario author determines whether scenario 211 is to include another one of actors 266 (
It should be noted that both first and second actors 266′ and 266″ appear to be behind portions of first background image 234′. For example, first actor 266′ appears to be partially hidden by a rock 296, and second actor 266″ appears to be partially hidden by shrubbery 298. During a background editing process, portions of first background image 234′ can be specified as foreground layers. Thus rock 296 and shrubbery 298 are each defined as a foreground layer within first background image 234′. When regions within a background image are defined as foreground layers, these foreground layers will overlay the mask portion of the video clips corresponding to first and second actors 266′ and 266″. This layering feature is described in greater detail in connection with background editing of
With reference back to
Referring to
Table 304 includes a “start point” symbol 308, an “external command” symbol 310, a “trigger” symbol 312, an “event” symbol 314, an “actor/behavior” symbol 316, an “ambient sound” symbol 318, and a “delay” symbol 320. Symbols 306 are provided herein for illustrative purposes. Those skilled in the art will recognize that symbols 306 could take on a great variety of shapes. Alternatively, color coding could be utilized to differentiate the various symbols.
As shown in
Interactive buttons within scenario logic window 226 can include an “external command” button 324, a “timer” button 326, and a “sound” button 328. External command symbol 310 is created in scenario logic window 226 when the scenario author clicks on external command button 324. External commands are interactions that may be created within scenario logic flow 322 that occur from outside of simulation system 108 (
Delay symbol 320 is created in scenario logic window 226 when the scenario author clicks on timer button 326. The use of timer button 326 allows the scenario author to input a time delay into scenario logic flow 322. Appropriate text may appear in, for example, properties window 228 of main window when delay symbol 320 is created in scenario logic window 226. This text can allow the author to enter a duration of the delay, or can allow the author to select from a number of pre-determined durations of the delay.
Ambient sound symbol 318 is created in scenario logic window 226 when the scenario author clicks on sound button 328. The use of sound button 328 allows the scenario author to input ambient sound into scenario logic flow 322. Text may appear in, for example, properties window 228 of main window when ambient sound symbol 318 is created in scenario logic window 226. This text may be a list of sound files that are stored within database 203 (
Trigger symbol 312 within scenario logic flow 322 represents notification to actor/behavior symbol 316 that something has occurred. Whereas, event symbol 314 within scenario logic flow 322 represents an occurrence of something within an actor's behavior that will cause a reaction within scenario logic flow 322. In this exemplary embodiment, trigger symbol 312 and event symbol 314 can be generated when the scenario author “right clicks” on actor/behavior symbol 316.
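For illustration only, the following sketch interprets a simple flowchart-style logic flow built from the symbols just described (start point, ambient sound, delay, external command, and actor/behavior); the step names, the sound file name, and the interpreter are hypothetical and do not represent the file format used by the scenario creation software.

```python
# A minimal, hypothetical sketch of a scenario "script" assembled from the
# symbols described above, expressed as an ordered list of steps.
import time

SCENARIO_SCRIPT = [
    ("start", None),
    ("ambient_sound", "desert_wind.wav"),
    ("delay", 2.0),                            # wait before anything happens
    ("external_command", "instructor_go"),     # wait for operator input
    ("actor_behavior", ("Offender 1", "Duck and Shoot")),
]

def run_scenario(script, external_commands):
    """Walk the script in order; external commands block until received."""
    for step, arg in script:
        if step == "start":
            print("scenario start")
        elif step == "delay":
            time.sleep(arg)
        elif step == "external_command":
            print(f"waiting for external command '{arg}'")
            external_commands[arg]()           # e.g. block on operator input
        elif step == "ambient_sound":
            print(f"start ambient sound {arg}")
        elif step == "actor_behavior":
            actor, behavior = arg
            print(f"start actor '{actor}' with behavior '{behavior}'")

run_scenario(SCENARIO_SCRIPT, {"instructor_go": lambda: print("go received")})
```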
Referring back to
Scenario logic flow 322 describes a “script” for scenario 211 (
Scenario logic flow 322 is highly simplified for clarity of understanding. However, in general it should be understood that scenario logic can be generated such that the behavior of a first actor can affect the behavior of a second actor and/or that an external command can affect the behavior of either of the actors. The behaviors of the actors can also be affected by interaction of trainee 26 within scenario 211. This interaction can occur at the behavior level of the actors, and is described in greater detail in connection with
Returning to
Scenario provision process 202 includes ellipses 348 separating scenario save task 344 and scenario display task 346. Ellipses 348 indicate an omission of standard processing tasks for simplicity of illustration. These processing tasks may include saving scenario 211 in a format compatible for playback at simulation system 108, writing scenario 211 to a storage medium that is readable by simulation system 108, conveying scenario 211 to simulation system 108, and so forth. Following task 346, scenario provision process 202 exits.
Referring to
As mentioned briefly above, background images 234 (
Interactive buttons within background editor window 352 include a “load panoramic” button 362, a “pan” button 364, and a “layer” button 366. Load panoramic button 362 allows a user to browse within computing system 200 (
As illustrated in
As illustrated in
As illustrated in
In the context of the following description, animation sequences 384 are the scripted actions that any of actors 266 may perform. Video clips 386 may be recorded of actors 266 performing animation sequences 384 against a blue or green screen. Information regarding video clips 386 are subsequently recorded in association with one of actors 266. In addition, video clips 386 are distinguished by identifiers 388, such as a frame number sequence, in table 382 characterizing one of animation sequences 384. Thus, video clips 386 portray actors 266 performing particular animation sequences 384.
A logical grouping of animation sequences 384 defines one of behaviors 278 (
a-d show an illustration of a single frame 390 of an exemplary one of video clips 386 undergoing video filming and editing. Motion picture video filming may be performed utilizing a standard or high definition video camera. Video editing may be performed utilizing video editing software for generating digital “masks” of the actor's performance. Those skilled in the art will recognize that video clips 386 contain many more than a single frame. However, only a single frame 390 is shown to illustrate post production processing that may occur to generate video clips 386 for use with scenario provision process 202 (
At
Zones 398 can be computed using matte 393, i.e., the alpha channel, as a starting point. For example, in the area of frame 390 where the opacity exceeds approximately ninety-five percent, i.e., mask portion 394, it can be assumed that the image asset, i.e. first actor 266′, is “solid” and therefore can be hit by a bullet. Any less opacity will cause the bullet to “miss” and hit the next object in the path of the bullet. This hit zone information can be enhanced by adding different types of zones 398 to different areas of first actor 266′. For example,
At
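A minimal sketch of the hit-zone derivation described above follows, assuming the matte is available as a normalized alpha array; the lethal-region rectangle and the numeric zone codes are illustrative assumptions.

```python
import numpy as np

def hit_zones_from_matte(alpha, lethal_region, opacity_threshold=0.95):
    """Derive per-pixel hit zones from a clip's alpha matte: pixels above
    roughly 95% opacity are 'solid' and can be hit.  The lethal_region
    rectangle and zone codes are illustrative assumptions.
    Returns an array of 0 = miss, 1 = non-lethal hit, 2 = lethal hit."""
    zones = np.zeros(alpha.shape, dtype=np.uint8)
    solid = alpha > opacity_threshold
    zones[solid] = 1                                   # any solid pixel: a hit
    top, bottom, left, right = lethal_region
    lethal = np.zeros_like(solid)
    lethal[top:bottom, left:right] = True
    zones[solid & lethal] = 2                          # e.g. a head/torso area
    return zones

def classify_shot(zones, row, col):
    return {0: "miss", 1: "non-lethal hit", 2: "lethal hit"}[int(zones[row, col])]

# Example: a synthetic 100 x 60 matte with an opaque silhouette.
alpha = np.zeros((100, 60)); alpha[20:95, 20:40] = 1.0
zones = hit_zones_from_matte(alpha, lethal_region=(20, 50, 20, 40))
print(classify_shot(zones, 30, 30), classify_shot(zones, 80, 30),
      classify_shot(zones, 10, 5))
```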
Referring to
Like table 304 (
As shown in
A branching options window 420 facilitates generation of behavior logic flow 408. Branching options window 420 includes a number of user interactive buttons. For example, window 420 includes a “branch” button 422, an “event” button 424, a “trigger” button 426, a “random” button 428, and an “option” button 430. In general, selection of branch button 422 allows for a branch to occur within behavior logic flow 408. Selection of event button 424 results in the generation of event symbol 314, and selection of trigger button 426 results in the generation of trigger symbol 312 in behavior logic flow 408.
It is interesting to note that the definitions of trigger and event symbols 312 and 314, respectively, when utilized within behavior logic flow 408 differ slightly from their definitions set forth in connection with scenario logic flow 322 (
Selection of random button 428 results in the generation of random symbol 416 in behavior logic flow 408. Similarly, selection of option button 430 results in the generation of option symbol 418 in behavior logic flow 408. The introduction of random and/or option symbols 416 and 418, respectively, into behavior logic flow 408 introduces random or unexpected properties to a behavior logic flow. These random or unexpected properties will be discussed in connection with
A properties window 432 allows the selection of animation sequences 384. In addition, properties window 432 allows the behavior author to assign various properties to the selected one of animation sequences 384. These various properties can include, for example, selection of a particular sound associated with a gunshot. When one of animation sequences 384 is generated, animation sequence symbol 414 will appear in behavior editor window 406. The various symbols 412 will be presented in behavior editor window 406 as “floating” or unconnected with regard to any other symbols 412 appearing in window 406 until the behavior creation author creates those connections. Symbols 412 within behavior logic flow 408 are interconnected by arrows 432 to define the various relationships and interactions.
Behavior logic flow 408 describes a “script” for one of behaviors 278 (
The “script” for behavior logic flow 436 is as follows: behavior flow 436 starts (Start point 308) and animation sequence 384 is presented (Duck 414). Next, a random property is introduced (Random 416). The random property (Random 416) allows behavior logic flow 436 to branch to either an optional side logic flow (Side 418) or an optional stand logic flow (Stand 418). Option symbols 418 indicate that logic flow can include either side logic flow, stand logic flow, or both side and stand logic flows when implementing the random property (Random 416).
First reviewing side logic flow (Side 418), animation sequence 384 is presented (From Duck: Side & Shoot 414). This translates to “from the duck position, move sideways and shoot.” Next, animation sequence 384 is presented (From Side: Shoot 414), meaning from the sideways position, shoot the weapon. Next, a random property (Random 416) is introduced. The random property allows behavior logic flow 436 to branch and present either animation sequence 384 (From Side: Shoot 414) or animation sequence 384 (From Side: Shoot & Duck 414).
During any of the three animation sequences (From Duck: Side & Shoot 414), (From Side: Shoot 414), and (From Side: Shoot & Duck 414), an event can occur (Shot 314). If an event occurs (Shot 314), a trigger is generated (Fall 312), and another animation sequence 384 is presented (From Side: Shoot & Fall). If another event occurs (Shot 314), another trigger is generated (Fall 312), and yet another animation sequence 384 (Twitch 414) is presented. If animation sequence 384 (From Side: Shoot & Duck 414) is presented for a period of time, and no event occurs, i.e., Shot 314 does not occur, behavior logic flow 436 loops back to animation sequence 384 (Duck 414).
Next reviewing stand logic flow (Stand 418), animation sequence 384 is presented (From Duck: Stand & Shoot 414). This translates to “from the duck position, stand up and shoot.” Next, animation sequence 384 is presented (From Stand: Shoot and Duck 414), meaning from the standing position, shoot the weapon, then duck. If an event associated with animation sequences 384 (From Duck: Stand & Shoot 414) and (From Stand: Shoot and Duck 414) does not occur, i.e., Shot 314 does not occur, behavior logic flow 436 loops back to animation sequence 384 (Duck 414).
However, during either of the two animation sequences 384 (From Duck: Stand & Shoot 414) and (From Stand: Shoot and Duck 414), an event can occur (Shot 314). If an event occurs (Shot 314), a trigger is generated (Fall 312), and another animation sequence 384 is presented (From Stand: Shoot & Fall 414). If another event occurs (Shot 314), another trigger is generated (Fall 312), and yet another animation sequence 384 (Twitch 414) is presented.
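By way of illustration, the behavior logic flow just described can be sketched as a small state table keyed by the current animation sequence; the table format, the simplified handling of the random branches, and the was_shot callback standing in for the Shot event are assumptions, not the representation used by the behavior editor.

```python
import random

# A hypothetical sketch of the duck/side/stand behavior flow described above,
# expressed as a state table: state -> (next state if shot, next state otherwise).
BEHAVIOR_FLOW = {
    "Duck":                       None,   # resolved randomly below
    "From Duck: Side & Shoot":    ("From Side: Shoot & Fall", "From Side: Shoot"),
    "From Side: Shoot":           ("From Side: Shoot & Fall", "From Side: Shoot & Duck"),
    "From Side: Shoot & Duck":    ("From Side: Shoot & Fall", "Duck"),
    "From Duck: Stand & Shoot":   ("From Stand: Shoot & Fall", "From Stand: Shoot and Duck"),
    "From Stand: Shoot and Duck": ("From Stand: Shoot & Fall", "Duck"),
    "From Side: Shoot & Fall":    ("Twitch", None),
    "From Stand: Shoot & Fall":   ("Twitch", None),
    "Twitch":                     (None, None),
}

def run_behavior(was_shot, max_steps=12):
    """Play the behavior until it ends or the step budget is exhausted."""
    state = "Duck"
    for _ in range(max_steps):
        print("play:", state)
        if state == "Duck":                    # random branch: side or stand
            state = random.choice(["From Duck: Side & Shoot",
                                   "From Duck: Stand & Shoot"])
            continue
        on_shot, otherwise = BEHAVIOR_FLOW[state]
        state = on_shot if was_shot(state) else otherwise
        if state is None:
            break

# Example: the trainee scores a hit while the actor shoots from the side.
run_behavior(was_shot=lambda clip: clip == "From Side: Shoot")
```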
Although only two behavior logic flows for behaviors 278 (
In summary, the present invention teaches a method for scenario provision in a simulation system that utilizes executable code operable on a computing system. The executable code is in the form of a scenario provision process that permits the user to create new scenarios with the importation of sounds and image objects, such as panoramic pictures, still digital pictures, standard and high-definition video files, and green or blue screen video. Green or blue screen based filming provides for extensive reusability of content, as individual “actors” can be filmed and then “dropped” into various settings with various other “actors.” In addition, the program and method permits the user to place the image objects (for example, actor video clips) in a desired location within a background image. The program and method further allows a user to manipulate a panoramic image for use as a background image in a single or multi-screen scenario playback system. The program and method permits the user to assign sounds and image objects to layers so that the user can define what object is displayed in front of or behind another object. In addition, the program and method enables the user to readily construct scenario logic flow defining a scenario through a readily manipulated and understandable flowchart style user interface.
Although the preferred embodiments of the invention have been illustrated and described in detail, it will be readily apparent to those skilled in the art that various modifications may be made therein without departing from the spirit of the invention or from the scope of the appended claims. For example, the process steps discussed and the images provided herein can take on a great number of variations and can be performed and shown in a differing order than that which was presented.
The present invention is a continuation in part (CIP) of “Multiple Screen Simulation System and Method for Situational Response Training,” U.S. patent application Ser. No. 10/800,942, filed 15 Mar. 2004, which is incorporated by reference herein. In addition, the present invention claims priority under 35 U.S.C. §119(e) to: “Video Hybrid Computer-Generated Imaging Software,” U.S. Provisional Patent Application Ser. No. 60/633,087, filed 3 Dec. 2004, which is incorporated by reference herein.
Related U.S. Application Data: Provisional application 60/633,087, filed Dec. 2004, US. Parent application 10/800,942, filed Mar. 2004, US; child application 11/286,124, filed Nov. 2005, US.