A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.
This application claims the priority of U.S. Provisional Application No. 62/990,097, entitled “DYNAMIC SCENARIO CREATION IN VIRTUAL REALITY SIMULATION SYSTEMS,” filed on Mar. 16, 2020, the disclosure of which is hereby incorporated by reference in its entirety.
This application generally relates to interactive virtual reality systems, and in particular, to the development of variable virtual reality content or experiences in virtual reality simulations.
Virtual reality systems may offer a unique form of entertainment and training. A virtual reality environment may refer to an immersive virtual world/space experience with which users may interact. Increased interest in virtual reality as an industry and improvements in virtual reality technology have expanded virtual reality usage. However, traditional scriptwriting tools are not well-suited to the creation of virtual reality content due to the complexities of the more immersive environments that virtual reality content presents. Producing virtual reality content involves the careful use of sound, light, and movement to present a cohesive virtual reality experience.
Furthermore, many typical virtual reality environments that are used for real-world training are suited for only a small number of users. For example, multiple characters may perform certain predefined and coordinated animations. Traditional solutions to the problems of selecting which characters and actions to include in a particular interaction scenario, and which interaction scenarios are appropriate for the presently available game characters and context, are clumsy and inefficient.
There is thus a need to provide virtual reality environments that are suitable for a larger number of users.
The present invention provides a method and system for creating dynamic scenarios. According to one embodiment, the system comprises a processor and a memory having executable instructions stored thereon that, when executed by the processor, cause the processor to: execute virtual reality training software including a scenario editor that is selected by a user to create a dynamic scenario; generate the dynamic scenario by creating a folder including customizable files corresponding to a scene of the dynamic scenario; present a window including a project view of the customizable files that provides access to properties of a scenario information file including configurable scenario settings; and generate animator controllers including state machines that are linked with animated characters or items, wherein the animator controllers control animation behaviors of the animated characters or items in the dynamic scenario.
The animator controllers may include configurable machine states corresponding to the animated characters or items. The animator controllers may be configured to control flow between the configurable machine states based on transitions and parameters. The configurable machine states may include actions and idles of the animated characters or items. In one embodiment, the processor may be further configured to use transitions to control flow between machine states corresponding to the animated characters or items via the animator controllers. In another embodiment, the processor may be further configured to graphically render the transitions with line arrows between an origin state and a destination state.
The virtual reality computing system may further comprise the processor configured to generate an animator override controller object, wherein the animator override controller object synchronizes animated characters or items from a plurality of the animator controllers. The processor may also be configured to generate animator audio sync scripts, wherein the animator audio sync scripts add sound to the dynamic scenario. The animator audio sync scripts may associate audio sources with animation data objects of the animated characters or items. The animator audio sync scripts may associate audio sources with the animator controllers.
According to one embodiment, the method comprises: executing virtual reality training software including a scenario editor that is selected by a user to create a dynamic scenario; generating the dynamic scenario by creating a folder including customizable files corresponding to a scene of the dynamic scenario; creating a window including a project view of the customizable files that provides access to properties of a scenario information file including configurable scenario settings; and generating animator controllers including state machines that are linked with animated characters or items, wherein the animator controllers control animation behaviors of the animated characters or items in the dynamic scenario.
The animator controllers may include configurable machine states corresponding to the animated characters or items. Flow between the machine states corresponding to the animated characters or items may be controlled with transitions via the animator controllers. The method may further comprise graphically rendering the transitions with line arrows between an origin state and a destination state. An animator override controller object may be generated, wherein the animator override controller object synchronizes animated characters or items from a plurality of the animator controllers. Animator audio sync scripts may also be generated, wherein the animator audio sync scripts add sound to the dynamic scenario. The animator audio sync scripts may associate audio sources with animation data objects of the animated characters or items. The animator audio sync scripts may also associate audio sources with the animator controllers.
The invention is illustrated in the figures of the accompanying drawings which are meant to be exemplary and not limiting, in which like references are intended to refer to like or corresponding parts.
Subject matter will now be described more fully hereinafter with reference to the accompanying drawings, which form a part hereof, and which show, by way of illustration, exemplary embodiments in which the invention may be practiced. Subject matter may, however, be embodied in a variety of different forms and, therefore, covered or claimed subject matter is intended to be construed as not being limited to any example embodiments set forth herein; example embodiments are provided merely to be illustrative. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention. Likewise, a reasonably broad scope for claimed or covered subject matter is intended. Throughout the specification and claims, terms may have nuanced meanings suggested or implied in context beyond an explicitly stated meaning. Likewise, the phrase “in one embodiment” as used herein does not necessarily refer to the same embodiment and the phrase “in another embodiment” as used herein does not necessarily refer to a different embodiment. It is intended, for example, that claimed subject matter include combinations of exemplary embodiments in whole or in part. Among other things, for example, subject matter may be embodied as methods, devices, components, or systems. Accordingly, embodiments may, for example, take the form of hardware, software, firmware or any combination thereof (other than software per se). The following detailed description is, therefore, not intended to be taken in a limiting sense.
Systems and methods described herein are directed to simulations in virtual reality environments for providing training, education, or entertainment. The presently disclosed system may identify and select characters, assign roles, select interaction scenarios, and coordinate playing scenarios while aligning the characters. Dynamic scenarios, as disclosed herein, may refer to content creation that allows for additional flexibility in scenario development and reuse. Content may be created by using a system including animator controllers combined with custom features to handle multiple branches and provide fine-tuned character control.
Users may interact with the virtual reality computer system through the manipulation of handheld devices. Handheld devices may be coupled to tracker 106. Tracker 106 includes pin connector 122, power source 124, sensors 126, wireless transmitter 128, and microcontroller 130. Sensors 126 can monitor the handheld devices and transmit data to virtual reality computing device 110 to interpret the motions of the user and dynamically change a virtual reality environment based on the motions of the user.
The handheld devices may include a pin pad that can be communicatively or electrically connected to pin connector 122. Power source 124 may be connected to microcontroller 130 and used by microcontroller 130 to provide a voltage source to the handheld devices via pin connector 122. As such, microcontroller 130 may receive signals from closed electrical circuits connected to pin connector 122 and transmit the signals to virtual reality computing device 110 via wireless transmitter 128. Virtual reality computing device 110 may process the signals using processor(s) 132 and transmit corresponding images to headset unit 108 from wireless interface 134.
Microcontroller 130 may also provide power to sensors 126 and wireless transmitter 128 from power source 124. Sensors 126 can detect a position of tracker 106 within the x, y and z coordinates of a space, as well as orientation including yaw, pitch and roll. From a user's perspective, a handheld device connected to tracker 106 may be tracked when pointed up, down, left and right, tilted at an angle, or moved forward or backward. Sensors 126 may communicate the orientation of the handheld device to microcontroller 130, which sends the data to virtual reality computing device 110, where processor(s) 132 process the data and render corresponding images for transmission by wireless interface 134 to headset unit 108.
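By way of illustration only, the following Python sketch shows one way such a tracker sample, position (x, y, z) plus orientation (yaw, pitch, roll), might be represented and packed for wireless transmission. The packet layout is an assumption made for this sketch; the embodiments do not specify a wire format:

```python
import struct
from dataclasses import dataclass

# Hypothetical tracker sample: the embodiments describe position (x, y, z)
# and orientation (yaw, pitch, roll) but do not specify a wire format.
@dataclass
class TrackerSample:
    x: float
    y: float
    z: float
    yaw: float
    pitch: float
    roll: float

    def pack(self) -> bytes:
        # Six little-endian 32-bit floats, 24 bytes per sample (assumed layout).
        return struct.pack("<6f", self.x, self.y, self.z,
                           self.yaw, self.pitch, self.roll)

# Example: a sample the microcontroller might hand to the wireless transmitter.
sample = TrackerSample(x=0.1, y=1.5, z=-0.3, yaw=90.0, pitch=0.0, roll=5.0)
payload = sample.pack()
print(len(payload))  # 24 bytes, ready for transmission to the computing device
```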
Headset unit 108 may comprise a head mounted display that a user can place over their eyes. The headset unit 108 may be configured to communicate with the virtual reality computing device 110 to provide a display according to a virtual reality simulation program. Additionally, the headset unit 108 may be configured with positioning and/or motion sensors to provide user motion inputs to virtual reality computing device 110. When wearing the headset unit 108, the view may shift as the user looks up, down, left and right. The view may also change if the user tilts their head at an angle or moves their head forward or backward without changing the angle of gaze. Sensors on headset unit 108 may communicate to processor(s) 132 where the user is looking, and the processor(s) 132 may render corresponding images to the head mounted display. Sensors, as disclosed herein, can detect signals of any form, including electromagnetic signals, acoustic signals, optical signals and mechanical signals.
Virtual reality computing device 110 includes processor(s) 132, wireless interface 134, memory 136, and computer readable media storage 138. Processor(s) 132 may be configured to execute virtual reality training software stored within memory 136 and/or computer readable media storage 138, to communicate data to and from memory 136, and to control operations of the virtual reality computing device 110. The processor(s) 132 may comprise one or more central processing units, auxiliary processors, and graphics processing units. Memory 136 may include any one or combination of volatile memory elements (e.g., random access memory (RAM)). Computer readable media storage 138 may comprise non-volatile memory elements (e.g., read-only memory (ROM), hard drive, etc.). Wireless interface 134 may comprise a network device operable to connect to a wireless computer network for facilitating communications and data transfer with tracker 106 and headset unit 108.
The virtual reality training software may comprise an audio/visual interactive interface that enables a trainee to interact with a three-dimensional first-person-view environment in training scenarios with tracker devices or handheld devices connected to the tracker devices, such as weapons including virtual reality-enabled magazine assemblies as described in commonly owned U.S. Patent Application No. 62/874,234, entitled “MAGAZINE SIMULATOR FOR USAGE WITH WEAPONS IN A VIRTUAL REALITY SYSTEM” which is herein incorporated by reference in its entirety. Virtual reality computing device 110 may receive signals or commands from tracker 106 and headset unit 108 to generate corresponding data (including audio and video data) for depiction in the virtual reality environment.
Virtual reality training software may include a scenario library, as illustrated in the corresponding figure.
As shown in the corresponding figure, the scenario editor may present a window 400 that includes a project view 414 of the customizable files.
A scenario information file 412 may be accessed from the window 400 to configure scenario settings. For example, the scenario information file 412 contained in the “main” folder 410 may be selected in the project view 414 to show its properties in an “inspector view” 500, as illustrated in the corresponding figure.
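By way of illustration only, the following Python sketch shows one plausible shape for such a scenario information file. The field names and the JSON encoding are assumptions made for this sketch; the embodiments do not enumerate the settings of file 412 or specify its format:

```python
import json
from dataclasses import dataclass, field

# Hypothetical schema for a scenario information file; the actual properties
# shown in the inspector view are defined by the editor, not this sketch.
@dataclass
class ScenarioInfo:
    scenario_name: str
    scene: str
    description: str = ""
    characters: list = field(default_factory=list)

    @classmethod
    def load(cls, text: str) -> "ScenarioInfo":
        return cls(**json.loads(text))

raw = '{"scenario_name": "TrafficStop", "scene": "night_street", "characters": ["driver"]}'
info = ScenarioInfo.load(raw)
print(info.scenario_name, info.scene)  # configurable settings exposed as properties
```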
Animator controllers may be added to the dynamic scenario files. Animator controllers may comprise state machines that can be linked or associated with animated characters or items to control their animation behaviors in the scenario. During a scenario editor creation process, the dynamic scenario creator wizard may create an animator controller from an existing template. Animator controllers may be viewed and selected from the “animation” folder 404.
To create a new state, a user may either A) right-click anywhere in the animator controller view and select “Create state>Empty”; or B) drag a desired animation data object for a core state into the animator window. Depending on how it was created, the state may be named either “New State” or the same as the animation data object. The animator controller may be configured to control the flow between machine states using “transitions,” which may be defined by “parameters.” Transitions may be created for each state by, for example, right-clicking on an origin intro state 604 and selecting “Make Transition” 702, as illustrated in the corresponding figure.
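By way of illustration only, the following Python sketch models the data structures that such a state machine implies. The State and Transition types, their fields, and the state names are assumptions made for this sketch and are not part of the disclosed embodiments:

```python
from dataclasses import dataclass, field

@dataclass
class Transition:
    target: str                # name of the destination state
    condition: str = ""        # parameter the transition tests, if any

@dataclass
class State:
    name: str
    clip: str = ""             # animation data object backing the state
    transitions: list = field(default_factory=list)

# A state created from an animation data object keeps that object's name;
# an empty state defaults to "New State".
intro = State(name="Intro", clip="intro_idle")
new_state = State(name="New State")

# "Make Transition" draws a line arrow from an origin state to a destination.
intro.transitions.append(Transition(target=new_state.name))
print(intro.transitions[0].target)  # -> "New State"
```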
A transition may be selected to bring up the inspector view, in which properties of the transition may be configured.
To guide the animator controller through the target states, parameters may be created to distinguish between the transitions. To create a parameter, a “Parameters” tab 902 in the top left of the animator window may be selected, as shown in the corresponding figure.
For example, referring to the corresponding figure, the origin intro state 604 may include one outgoing transition per branch, and a parameter set during the scenario may determine which transition fires, guiding the animator controller into the appropriate target state.
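By way of illustration only, the following self-contained Python sketch models parameter-driven branching of this kind. The parameter names, state names, and the trigger-reset behavior are assumptions made for this sketch:

```python
# Hypothetical trigger-style parameters distinguishing two branch transitions.
parameters = {"GoLeft": False, "GoRight": False}

transitions_from_intro = [
    {"target": "LeftBranch", "condition": "GoLeft"},
    {"target": "RightBranch", "condition": "GoRight"},
]

def step(current: str) -> str:
    """Follow the first transition whose condition parameter is set."""
    for t in transitions_from_intro:
        if parameters[t["condition"]]:
            parameters[t["condition"]] = False  # assumed: triggers reset once consumed
            return t["target"]
    return current                              # no condition met: remain in state

parameters["GoRight"] = True  # e.g., set in response to a trainee's action
print(step("Intro"))          # -> "RightBranch"
```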
To keep animations of all objects and characters in sync throughout a scenario, the animations may all utilize a same core state machine. According to one embodiment, animator override controller objects may be created. An animator override controller may be created for each object that is to have unique animations; referring to the corresponding figure, the override controller may retain the states and transitions of the core state machine while substituting object-specific animations.
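By way of illustration only, the following Python sketch models the override idea: every object reuses the core states and flow while substituting its own animation clips. The controller representation and the clip names are assumptions made for this sketch:

```python
# Core controller shared by all objects: the flow (states) stays identical.
core_controller = {
    "states": ["Intro", "Action", "Outro"],
    "clips": {"Intro": "human_intro", "Action": "human_action", "Outro": "human_outro"},
}

def make_override(core: dict, clip_overrides: dict) -> dict:
    """Reuse the core states; swap in object-specific animation clips."""
    merged = dict(core["clips"])
    merged.update(clip_overrides)
    return {"states": core["states"], "clips": merged}

# A second character plays its own clips but stays in lockstep with the core states.
dog_controller = make_override(core_controller, {"Intro": "dog_intro", "Action": "dog_action"})
print(dog_controller["clips"]["Intro"])   # -> "dog_intro"
print(dog_controller["states"])           # same flow as the core state machine
```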
Adding sound to dynamic scenarios may be done utilizing animator audio sync scripts. A script may be needed for each controlled audio source. The scripts can be located anywhere (e.g., at the same level as the audio source itself), but for ease of access and organization it may be advisable to keep them at a unified location per character or object, as shown in the corresponding figure.
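By way of illustration only, the following Python sketch models one such script: a per-audio-source object that maps state names to the sound played on state entry. The class name, the cue mapping, and the use of print as a stand-in for actual playback are assumptions made for this sketch:

```python
# Hypothetical animator audio sync script: one instance per controlled audio source.
class AnimatorAudioSync:
    def __init__(self, audio_source: str, cues: dict):
        self.audio_source = audio_source  # the audio source this script controls
        self.cues = cues                  # state name -> sound file

    def on_state_enter(self, state: str) -> None:
        clip = self.cues.get(state)
        if clip:
            # Stand-in for real playback on the associated audio source.
            print(f"[{self.audio_source}] playing {clip}")

guard_voice = AnimatorAudioSync("guard_mouth",
                                {"Intro": "greeting.wav", "Action": "warning.wav"})
guard_voice.on_state_enter("Intro")  # -> [guard_mouth] playing greeting.wav
```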
Referring to the corresponding figure, an animator object sync script 1800 may be used to enable or disable objects in the scenario based on tracked animation states.
When an animator controller 1804 is linked to the animator object sync script 1800, as indicated by 1806, states may be loaded based on the animator controller 1804 to configure tracked states. In the embodiment illustrated by the corresponding figure, a single animator object sync script tracks the enable or disable states for two objects, “CupInHand” and “CupOnDesk.”
Alternatively, two “Animator Object Sync” scripts may be used, one for each item. One sync script may have the three enable or disable states for “CupInHand,” and the other would have the three states for “CupOnDesk.” Depending on whether items are synchronized or whether they are parented to related objects, it may make more sense to track their activations separately or together. Because these objects represent the same item in the user's reality (the user perceives only one cup), this is an example of using one animator object sync script for the two objects, as sketched below.
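By way of illustration only, the following Python sketch models the single-script approach: each tracked state activates exactly the object(s) that should be present in it. The state names and activation sets are assumptions made for this sketch:

```python
# Hypothetical single animator object sync script covering both cup objects.
class AnimatorObjectSync:
    def __init__(self, activations: dict):
        self.activations = activations  # state name -> set of objects enabled in it
        self.enabled: set = set()

    def on_state_enter(self, state: str) -> None:
        # Enable the objects listed for this state; all others are disabled.
        self.enabled = self.activations.get(state, set())

sync = AnimatorObjectSync({
    "CupOnDeskIdle": {"CupOnDesk"},   # hypothetical state names
    "PickUpCup":     {"CupInHand"},
    "SetCupDown":    {"CupOnDesk"},
})
sync.on_state_enter("PickUpCup")
print(sync.enabled)  # -> {'CupInHand'}: only one cup exists at a time, as perceived
```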
It should be understood that various aspects of the embodiments of the present invention could be implemented in hardware, firmware, software, or combinations thereof. In such embodiments, the various components and/or steps would be implemented in hardware, firmware, and/or software to perform the functions of the present invention. That is, the same piece of hardware, firmware, or module of software could perform one or more of the illustrated blocks (e.g., components or steps). In software implementations, computer software (e.g., programs or other instructions) and/or data is stored on a machine-readable medium as part of a computer program product and is loaded into a computer system or other device or machine via a removable storage drive, hard drive, or communications interface. Computer programs (also called computer control logic or computer-readable program code) are stored in a main and/or secondary memory, and executed by one or more processors (controllers, or the like) to cause the one or more processors to perform the functions of the invention as described herein. In this document, the terms “machine readable medium,” “computer-readable medium,” “computer program medium,” and “computer usable medium” are used to generally refer to media such as a random access memory (RAM); a read only memory (ROM); a removable storage unit (e.g., a magnetic or optical disc, flash memory device, or the like); a hard disk; or the like.
The foregoing description of the specific embodiments will so fully reveal the general nature of the invention that others can, by applying knowledge within the skill of the relevant art(s) (including the contents of the documents cited and incorporated by reference herein), readily modify and/or adapt for various applications such specific embodiments, without undue experimentation, without departing from the general concept of the present invention. Such adaptations and modifications are therefore intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance presented herein, in combination with the knowledge of one skilled in the relevant art(s).
Number | Date | Country
---|---|---
62990097 | Mar 2020 | US