Many computing applications such as computer games, multimedia applications, or the like use controls to allow users to manipulate game characters or other aspects of an application. Typically such controls are input using, for example, controllers, remotes, keyboards, mice, or the like. Unfortunately, such controls can be difficult to learn, thus creating a barrier between a user and such games and applications. Furthermore, such controls may be different than actual game actions or other application actions for which the controls are used. For example, a game control that causes a game character to swing a baseball bat may not correspond to an actual motion of swinging the baseball bat.
Disclosed herein are systems and methods for tracking motion of a user or other objects. The tracked motion is then used to update an application. For example, a user can manipulate avatars or other aspects of the application by using movement of the user's body and/or objects around the user, rather than (or in addition to) using controllers, remotes, keyboards, mice, or the like. Technology is provided that can amplify the user's motion in the virtual world to create a more compelling experience. For example, a small jump by a user can translate to a very high jump by an avatar in a virtual world game.
One embodiment includes using a camera to sense motion of a user. In response to sensing the motion of the user, the system creates and displays an animation of an object performing the motion of the user in a manner that is amplified in comparison to the motion of the user. The system creates and outputs audio/visual feedback in proportion to a magnitude of the sensed motion of the user.
One embodiment includes a camera that can sense motion of a user and a computer connected to the camera to receive data from the camera. The data indicates the motion of the user. The computer determines an action corresponding to the sensed motion of the user and determines a magnitude of the sensed motion of the user. The computer creates and displays an animation of an avatar in a video game performing the action in a manner that is amplified in comparison to the sensed motion. The action is amplified by a factor that is proportional to the determined magnitude of the sensed motion of the user.
One embodiment includes one or more processor readable storage devices having processor readable code embodied on the one or more processor readable storage devices. The processor readable code programs one or more processors to perform a method that comprises receiving data from a camera that indicates motion of a user, determining an action corresponding to the motion of the user indicated by the received data (including determining the start of the action by the user and determining the end of the action by the user), and creating and displaying an animation of an object in an application performing the action in a manner that is amplified in comparison to the sensed motion of the user such that the object starts the action at the start of the action by the user and the object ends the action at the end of the action by the user.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
A video game system (or other data processing system) tracks users and objects using depth images and/or visual images. The tracking is then used to update an application (e.g., a video game). Therefore, a user can manipulate game characters or other aspects of the application by using movement of the user's body and/or objects around the user, rather than (or in addition to) using controllers, remotes, keyboards, mice, or the like. For example, a user's motions can be used to drive the movement of an avatar in a virtual world. The avatar will perform the same (or similar) actions as the user.
In some situations, the avatar will perform the action that the user is performing; however, the avatar will perform that action in a manner that is amplified in comparison to the motion of the user. For example, an avatar will jump significantly higher than the user jumps, duck much lower than the user ducks, throw much harder than the user throws, etc. The amplification can be by a factor that is proportional to the determined magnitude of the user's motion. For example, the faster that the user jumps, the higher that the avatar will jump. The video game system will also create and output audio/visual feedback in proportion to a magnitude of the motion of the user.
Although the examples below include a video game system, the technology described herein also applies to other types of data processing systems and/or other types of applications.
As shown in
As shown in
According to one embodiment, the tracking system 10 may be connected to an audiovisual device 16 such as a television, a monitor, a high-definition television (HDTV), or the like that may provide game or application visuals and/or audio to a user such as the user 18. For example, the computing system 12 may include a video adapter such as a graphics card and/or an audio adapter such as a sound card that may provide audiovisual signals associated with the game application, non-game application, or the like. The audiovisual device 16 may receive the audiovisual signals from the computing system 12 and may then output the game or application visuals and/or audio associated with the audiovisual signals to the user 18. According to one embodiment, the audiovisual device 16 may be connected to the computing system 12 via, for example, an S-Video cable, a coaxial cable, an HDMI cable, a DVI cable, a VGA cable, a component video cable, or the like.
As shown in
In the example depicted in
Other movements by the user 18 may also be interpreted as other controls or actions and/or used to animate the user avatar, such as controls to bob, weave, shuffle, block, jab, or throw a variety of different power punches. Furthermore, some movements may be interpreted as controls that may correspond to actions other than controlling the user avatar 24. For example, in one embodiment, the user may use movements to end, pause, or save a game, select a level, view high scores, communicate with a friend, etc. According to another embodiment, the user may use movements to select the game or other application from a main user interface. Thus, in example embodiments, a full range of motion of the user 18 may be available, used, and analyzed in any suitable manner to interact with an application.
In example embodiments, the human target such as the user 18 may have an object. In such embodiments, the user of an electronic game may be holding the object such that the motions of the user and the object may be used to adjust and/or control parameters of the game. For example, the motion of a user holding a racket may be tracked and utilized for controlling an on-screen racket in an electronic sports game. In another example embodiment, the motion of a user holding an object may be tracked and utilized for controlling an on-screen weapon in an electronic combat game. Objects not held by the user can also be tracked, such as objects thrown, pushed, or rolled by the user (or a different user), as well as self-propelled objects. In addition to boxing, other games can also be implemented.
According to other example embodiments, the tracking system 10 may further be used to interpret target movements as operating system and/or application controls that are outside the realm of games. For example, virtually any controllable aspect of an operating system and/or application may be controlled by movements of the target such as the user 18.
As shown in
As shown in
According to another example embodiment, time-of-flight analysis may be used to indirectly determine a physical distance from the capture device 20 to a particular location on the targets or objects by analyzing the intensity of the reflected beam of light over time via various techniques including, for example, shuttered light pulse imaging.
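For illustration only, the time-of-flight relationship underlying such an analysis can be sketched as follows: the distance to a target is half of the round-trip path traveled by the emitted light. The function and variable names below are assumptions made for this example and do not correspond to any component of the capture device 20.

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0  # speed of light in a vacuum

def distance_from_round_trip_time(round_trip_seconds: float) -> float:
    """Estimate the distance to a target from the round-trip time of a light pulse.

    The emitted pulse travels to the target and back, so the one-way
    distance is half of the total path length.
    """
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2.0

# Example: a round trip of about 20 nanoseconds corresponds to roughly 3 meters.
print(distance_from_round_trip_time(20e-9))  # ~2.998 m
```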
In another example embodiment, the capture device 20 may use structured light to capture depth information. In such an analysis, patterned light (i.e., light displayed as a known pattern such as a grid pattern, a stripe pattern, or a different pattern) may be projected onto the scene via, for example, the IR light component 25. Upon striking the surface of one or more targets or objects in the scene, the pattern may become deformed in response. Such a deformation of the pattern may be captured by, for example, the 3-D camera 26 and/or the RGB camera 28 (and/or other sensor) and may then be analyzed to determine a physical distance from the capture device to a particular location on the targets or objects. In some implementations, the IR light component 25 is displaced from the cameras 26 and 28 so that triangulation can be used to determine the distance from the cameras 26 and 28. In some implementations, the capture device 20 will include a dedicated IR sensor to sense the IR light, or a sensor with an IR filter.
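For illustration only, the displacement between the light source and the cameras permits a standard triangulation: depth is proportional to the focal length multiplied by the baseline and divided by the observed disparity of the projected pattern. The sketch below assumes a simple pinhole camera model; the names and numeric values are illustrative and are not part of the capture device 20.

```python
def depth_from_disparity(focal_length_px: float,
                         baseline_m: float,
                         disparity_px: float) -> float:
    """Classic structured-light/stereo triangulation: depth = f * B / d.

    focal_length_px: camera focal length expressed in pixels.
    baseline_m:      separation between the projector (or second camera) and the camera.
    disparity_px:    horizontal shift of the observed pattern feature, in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# Example: f = 580 px, baseline = 7.5 cm, disparity = 29 px -> depth of 1.5 m.
print(depth_from_disparity(580.0, 0.075, 29.0))
```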
According to another embodiment, the capture device 20 may include two or more physically separated cameras that may view a scene from different angles to obtain visual stereo data that may be resolved to generate depth information. Other types of depth image sensors can also be used to create a depth image.
The capture device 20 may further include a microphone 30. The microphone 30 may include a transducer or sensor that may receive and convert sound into an electrical signal. According to one embodiment, the microphone 30 may be used to reduce feedback between the capture device 20 and the computing system 12 in the target recognition, analysis, and tracking system 10. Additionally, the microphone 30 may be used to receive audio signals that may also be provided to computing system 12.
In an example embodiment, the capture device 20 may further include a processor 32 that may be in communication with the image camera component 22. The processor 32 may include a standardized processor, a specialized processor, a microprocessor, or the like that may execute instructions including, for example, instructions for receiving a depth image, generating the appropriate data format (e.g., frame) and transmitting the data to computing system 12.
The capture device 20 may further include a memory component 34 that may store the instructions that are executed by processor 32, images or frames of images captured by the 3-D camera and/or RGB camera, or any other suitable information, images, or the like. According to an example embodiment, the memory component 34 may include random access memory (RAM), read only memory (ROM), cache, flash memory, a hard disk, or any other suitable storage component. As shown in
As shown in
Computing system 12 includes depth image processing and skeletal tracking module 50, which uses the depth images to track one or more persons detectable by the depth camera. Depth image processing and skeletal tracking module 50 provides the tracking information to application 52, which can be a video game, productivity application, communications application, or other software application. The audio data and visual image data are also provided to application 52 and to depth image processing and skeletal tracking module 50. Application 52 provides the tracking information, audio data, and visual image data to recognizer engine 54. In another embodiment, recognizer engine 54 receives the tracking information directly from depth image processing and skeletal tracking module 50 and receives the audio data and visual image data directly from capture device 20.
Recognizer engine 54 is associated with a collection of filters 60, 62, 64, . . . , 66, each comprising information concerning a gesture, action, or condition that may be performed by any person or object detectable by capture device 20. For example, the data from capture device 20 may be processed by filters 60, 62, 64, . . . , 66 to identify when a user or group of users has performed one or more gestures or other actions. Those gestures may be associated with various controls, objects, or conditions of application 52. Thus, the computing system 12 may use the recognizer engine 54, with the filters, to interpret movements.
Capture device 20 of
The system will use the RGB images and depth images to track a user's movements. For example, the system will track a skeleton of a person using the depth images. There are many methods that can be used to track the skeleton of a person using depth images. One suitable example of tracking a skeleton using depth images is provided in U.S. patent application Ser. No. 12/603,437, “Pose Tracking Pipeline,” filed on Oct. 21, 2009, Craig et al. (hereinafter referred to as the '437 Application), incorporated herein by reference in its entirety. The process of the '437 Application includes acquiring a depth image, down sampling the data, removing and/or smoothing high variance noisy data, identifying and removing the background, and assigning each of the foreground pixels to different parts of the body. Based on those steps, the system will fit a model to the data and create a skeleton. The skeleton will include a set of joints and connections between the joints.
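The following is a highly simplified sketch of the stages just described (down sampling, noise removal, background removal, body-part assignment, and model fitting). It is an illustration of the described flow under simplifying assumptions, not the process of the '437 Application; the thresholds and the centroid-based "model fit" are placeholders.

```python
import numpy as np

def track_skeleton(depth_image: np.ndarray) -> dict:
    """Simplified sketch of the described stages; not the '437 pipeline itself."""
    # Down sample the depth data.
    down = depth_image[::2, ::2].astype(float)

    # Remove/smooth high variance, noisy samples by clamping outliers.
    lo, hi = np.percentile(down, [1, 99])
    smoothed = np.clip(down, lo, hi)

    # Identify and remove the background: keep pixels closer than the median depth.
    foreground = smoothed < np.median(smoothed)

    # Assign foreground pixels to coarse body regions by vertical position
    # (a stand-in for a real per-pixel body-part classifier).
    rows, cols = np.nonzero(foreground)
    if rows.size == 0:
        return {}
    top, bottom = rows.min(), rows.max()
    head_rows = rows < top + (bottom - top) // 4

    # "Fit a model": reduce each region to a representative joint position.
    return {
        "head":   (rows[head_rows].mean(), cols[head_rows].mean()),
        "center": (rows.mean(), cols.mean()),
    }

# Example with a synthetic depth image (a near blob against a far background).
img = np.full((120, 160), 4000, dtype=np.uint16)
img[30:100, 60:100] = 1500
print(track_skeleton(img))
```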
Recognizer engine 54 (of computing system 12 depicted in
Filters may be modular or interchangeable. In one embodiment, a filter has a number of inputs (each of those inputs having a type) and a number of outputs (each of those outputs having a type). A first filter may be replaced with a second filter that has the same number and types of inputs and outputs as the first filter without altering any other aspect of the recognizer engine architecture. For instance, there may be a first filter for driving that takes as input skeletal data and outputs a confidence that the gesture associated with the filter is occurring and an angle of steering. Where one wishes to substitute this first driving filter with a second driving filter—perhaps because the second driving filter is more efficient and requires fewer processing resources—one may do so by simply replacing the first filter with the second filter so long as the second filter has those same inputs and outputs—one input of skeletal data type, and two outputs of confidence type and angle type.
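As an illustrative sketch of such interchangeability, the example below types a driving filter as any object that accepts skeletal data and returns a confidence and a steering angle; any second filter exposing the same inputs and outputs could replace the first without altering the engine. The class names, field names, and the crude angle computation are assumptions made for this example.

```python
from dataclasses import dataclass
from typing import List, Tuple, Protocol

@dataclass
class SkeletalFrame:
    """One frame of tracked joints: joint name -> (x, y, z)."""
    joints: dict

class DrivingFilter(Protocol):
    """Any filter with this signature can be swapped in for another:
    one input of skeletal-data type, two outputs (confidence, steering angle)."""
    def evaluate(self, frames: List[SkeletalFrame]) -> Tuple[float, float]: ...

class SimpleDrivingFilter:
    def evaluate(self, frames: List[SkeletalFrame]) -> Tuple[float, float]:
        if not frames:
            return 0.0, 0.0
        # Derive a crude "steering wheel" tilt from the offset between the hands.
        left = frames[-1].joints.get("left_hand", (0.0, 0.0, 0.0))
        right = frames[-1].joints.get("right_hand", (0.0, 0.0, 0.0))
        angle = (right[1] - left[1]) * 45.0
        # Confidence grows with the amount of observed skeletal data.
        confidence = min(1.0, len(frames) / 10)
        return confidence, angle

# A more efficient second driving filter could replace SimpleDrivingFilter without
# changing the engine, because it exposes the same inputs and outputs.
```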
A filter need not have a parameter. For instance, a “user height” filter that returns the user's height may not allow for any parameters that may be tuned. An alternate “user height” filter may have tunable parameters, such as whether to account for a user's footwear, hairstyle, headwear, and posture in determining the user's height.
Inputs to a filter may comprise things such as joint data about a user's joint position, angles formed by the bones that meet at the joint, RGB color data from the scene, and the rate of change of an aspect of the user. Outputs from a filter may comprise things such as the confidence that a given gesture is being made, the speed at which a gesture motion is made, and a time at which a gesture motion is made.
The recognizer engine 54 may have a base recognizer engine that provides functionality to the filters. In one embodiment, the functionality that the recognizer engine 54 implements includes an input-over-time archive that tracks recognized gestures and other input, a Hidden Markov Model implementation (where the modeled system is assumed to be a Markov process—one where a present state encapsulates any past state information necessary to determine a future state, so no other past state information must be maintained for this purpose—with unknown parameters, and hidden parameters are determined from the observable data), as well as other functionality required to solve particular instances of gesture recognition.
Filters 60, 62, 64, . . . , 66 are loaded and implemented on top of the recognizer engine 54 and can utilize services provided by recognizer engine 54 to all filters 60, 62, 64, . . . , 66. In one embodiment, recognizer engine 54 receives data and determines whether it meets the requirements of any filter 60, 62, 64, . . . , 66. Because services such as parsing the input are provided once by recognizer engine 54 rather than by each filter 60, 62, 64, . . . , 66, such a service need only be processed once in a period of time, as opposed to once per filter for that period, so the processing required to determine gestures is reduced.
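The sketch below illustrates the idea of parsing the input once in the base engine and offering the shared result to every loaded filter. It is an assumption-laden illustration and does not reproduce the recognizer engine of the applications incorporated by reference.

```python
class RecognizerEngine:
    """Illustrative base engine: parses capture data once and shares it with all filters."""

    def __init__(self, filters):
        self.filters = list(filters)   # filters loaded on top of the engine
        self.history = []              # input-over-time archive of parsed frames

    def process(self, raw_frame):
        # Parse the raw capture-device data once, rather than once per filter.
        parsed = self._parse(raw_frame)
        self.history.append(parsed)

        # Offer the shared, parsed input to every filter and collect results
        # from those filters whose requirements are met.
        results = {}
        for f in self.filters:
            confidence, *other_outputs = f.evaluate(self.history)
            if confidence > 0.5:
                results[type(f).__name__] = (confidence, other_outputs)
        return results

    @staticmethod
    def _parse(raw_frame):
        # Placeholder for shared parsing/normalization of the capture-device data.
        return raw_frame
```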
Application 52 may use the filters 60, 62, 64, . . . , 66 provided with the recognizer engine 54, or it may provide its own filter, which plugs in to recognizer engine 54. In one embodiment, all filters have a common interface to enable this plug-in characteristic. Further, all filters may utilize parameters, so that a single gesture tool (described below) may be used to debug and tune the entire filter system.
More information about recognizer engine 54 can be found in U.S. patent application Ser. No. 12/422,661, “Gesture Recognizer System Architecture,” filed on Apr. 13, 2009, incorporated herein by reference in its entirety. More information about recognizing gestures can be found in U.S. patent application Ser. No. 12/391,150, “Standard Gestures,” filed on Feb. 23, 2009, and U.S. patent application Ser. No. 12/474,655, “Gesture Tool,” filed on May 29, 2009, both of which are incorporated herein by reference in their entirety.
A graphics processing unit (GPU) 108 and a video encoder/video codec (coder/decoder) 114 form a video processing pipeline for high speed and high resolution graphics processing. Data is carried from the graphics processing unit 108 to the video encoder/video codec 114 via a bus. The video processing pipeline outputs data to an A/V (audio/video) port 140 for transmission to a television or other display. A memory controller 110 is connected to the GPU 108 to facilitate processor access to various types of memory 112, such as, but not limited to, a RAM (Random Access Memory).
The multimedia console 100 includes an I/O controller 120, a system management controller 122, an audio processing unit 123, a network interface controller 124, a first USB host controller 126, a second USB controller 128 and a front panel I/O subassembly 130 that are preferably implemented on a module 118. The USB controllers 126 and 128 serve as hosts for peripheral controllers 142(1)-142(2), a wireless adapter 148, and an external memory device 146 (e.g., flash memory, external CD/DVD ROM drive, removable media, etc.). The network interface 124 and/or wireless adapter 148 provide access to a network (e.g., the Internet, home network, etc.) and may be any of a wide variety of various wired or wireless adapter components including an Ethernet card, a modem, a Bluetooth module, a cable modem, and the like.
System memory 143 is provided to store application data that is loaded during the boot process. A media drive 144 is provided and may comprise a DVD/CD drive, Blu-Ray drive, hard disk drive, or other removable media drive, etc. The media drive 144 may be internal or external to the multimedia console 100. Application data may be accessed via the media drive 144 for execution, playback, etc. by the multimedia console 100. The media drive 144 is connected to the I/O controller 120 via a bus, such as a Serial ATA bus or other high speed connection (e.g., IEEE 1394).
The system management controller 122 provides a variety of service functions related to assuring availability of the multimedia console 100. The audio processing unit 123 and an audio codec 132 form a corresponding audio processing pipeline with high fidelity and stereo processing. Audio data is carried between the audio processing unit 123 and the audio codec 132 via a communication link. The audio processing pipeline outputs data to the A/V port 140 for reproduction by an external audio player or device having audio capabilities.
The front panel I/O subassembly 130 supports the functionality of the power button 150 and the eject button 152, as well as any LEDs (light emitting diodes) or other indicators exposed on the outer surface of the multimedia console 100. A system power supply module 136 provides power to the components of the multimedia console 100. A fan 138 cools the circuitry within the multimedia console 100.
The CPU 101, GPU 108, memory controller 110, and various other components within the multimedia console 100 are interconnected via one or more buses, including serial and parallel buses, a memory bus, a peripheral bus, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures can include a Peripheral Component Interconnects (PCI) bus, PCI-Express bus, etc.
When the multimedia console 100 is powered on, application data may be loaded from the system memory 143 into memory 112 and/or caches 102, 104 and executed on the CPU 101. The application may present a graphical user interface that provides a consistent user experience when navigating to different media types available on the multimedia console 100. In operation, applications and/or other media contained within the media drive 144 may be launched or played from the media drive 144 to provide additional functionalities to the multimedia console 100.
The multimedia console 100 may be operated as a standalone system by simply connecting the system to a television or other display. In this standalone mode, the multimedia console 100 allows one or more users to interact with the system, watch movies, or listen to music. However, with the integration of broadband connectivity made available through the network interface 124 or the wireless adapter 148, the multimedia console 100 may further be operated as a participant in a larger network community.
When the multimedia console 100 is powered ON, a set amount of hardware resources is reserved for system use by the multimedia console operating system. These resources may include a reservation of memory (e.g., 16 MB), CPU and GPU cycles (e.g., 5%), networking bandwidth (e.g., 8 kbps), etc. Because these resources are reserved at system boot time, the reserved resources do not exist from the application's view.
In particular, the memory reservation preferably is large enough to contain the launch kernel, concurrent system applications and drivers. The CPU reservation is preferably constant such that if the reserved CPU usage is not used by the system applications, an idle thread will consume any unused cycles.
With regard to the GPU reservation, lightweight messages generated by the system applications (e.g., popups) are displayed by using a GPU interrupt to schedule code to render the popup into an overlay. The amount of memory required for an overlay depends on the overlay area size, and the overlay preferably scales with screen resolution. Where a full user interface is used by the concurrent system application, it is preferable to use a resolution independent of application resolution. A scaler may be used to set this resolution such that the need to change frequency and cause a TV resynch is eliminated.
After the multimedia console 100 boots and system resources are reserved, concurrent system applications execute to provide system functionalities. The system functionalities are encapsulated in a set of system applications that execute within the reserved system resources described above. The operating system kernel identifies threads that are system application threads versus gaming application threads. The system applications are preferably scheduled to run on the CPU 101 at predetermined times and intervals in order to provide a consistent system resource view to the application. The scheduling is to minimize cache disruption for the gaming application running on the console.
When a concurrent system application requires audio, audio processing is scheduled asynchronously to the gaming application due to time sensitivity. A multimedia console application manager (described below) controls the gaming application audio level (e.g., mute, attenuate) when system applications are active.
Input devices (e.g., controllers 142(1) and 142(2)) are shared by gaming applications and system applications. The input devices are not reserved resources, but are to be switched between system applications and the gaming application such that each will have a focus of the device. The application manager preferably controls the switching of the input stream, without the gaming application's knowledge, and a driver maintains state information regarding focus switches. The cameras 26, 28 and capture device 20 may define additional input devices for the console 100 via USB controller 126 or other interface.
Computing system 220 comprises a computer 241, which typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 241 and includes both volatile and nonvolatile media, removable and non-removable media. The system memory 222 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 223 and random access memory (RAM) 260. A basic input/output system 224 (BIOS), containing the basic routines that help to transfer information between elements within computer 241, such as during start-up, is typically stored in ROM 223. RAM 260 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 259. By way of example, and not limitation,
The computer 241 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only,
The drives and their associated computer storage media discussed above and illustrated in
The computer 241 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 246. The remote computer 246 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 241, although only a memory storage device 247 has been illustrated in
When used in a LAN networking environment, the computer 241 is connected to the LAN 245 through a network interface or adapter 237. When used in a WAN networking environment, the computer 241 typically includes a modem 250 or other means for establishing communications over the WAN 249, such as the Internet. The modem 250, which may be internal or external, may be connected to the system bus 221 via the user input interface 236, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 241, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation,
Either of the systems of
The above-described video game systems will track users and objects using depth images and/or visual images. The tracking is then used to update the application (e.g., a video game). Therefore, a user can manipulate game characters or other aspects of the application by using movement of the user's body and/or objects around the user. For example, a user's motions can be used to drive the movement of an avatar in a virtual world. The avatar will perform the same (or similar) actions as the user. In some situations, the avatar will perform the action that the user is performing; however, the avatar will perform that action in a manner that is amplified in comparison to the motion of the user.
In step 306, the system determines whether the intended action is an action that can be amplified. The system will be configured so that some actions can be amplified and other actions cannot be amplified. In one embodiment, only jumps and ducks are amplified. In another embodiment, only jumps, ducks, arm swings, punches, and throws are amplified. In other embodiments, other sets of actions can be amplified.
If the intended action is not an action that can be amplified, the system will interact with the user without amplifying any action in step 314. If the intended action is an action that can be amplified, then in step 308 it is determined whether the context of the application is suitable for amplification. For example, the system will determine, based on the context of the video game being played, whether it is appropriate to amplify the action. For example, if an avatar in a video game is in a cave or room with a very low ceiling and the user performs a jump, it would not be appropriate to amplify the jump. If the context of the application is not suitable for amplification, then the application will interact with the user (step 314) without amplification of the user's actions. However, if the context is suitable for amplification of the user's action, then in step 310 the system will create an animation depicting the avatar performing the same movement as the user; however, the avatar's movement will be amplified in comparison to the user, all in response to sensing the motion of the user. In one embodiment, the amount of amplification of the user's actions will be by a factor that is proportional to the magnitude of the user's motions, which will be described below. Additionally, in one embodiment, the animation will be created to synchronize with the user's movements so that the avatar will start and stop the animation at the same time that the user starts and stops the user's movements. Additionally, in step 312, the system will provide audio/visual feedback to the user in proportion to the magnitude of the sensed motion of the user. For purposes of this document, “audio/visual” includes audio only, visual only, or a combination of audio and visual. In step 314, the system will continue to interact with the user. The process of
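For illustration only, the decision flow of steps 306 through 314 might be organized as in the sketch below. The set of amplifiable actions, the context test, and the proportionality constant are assumptions chosen for the example, not the claimed implementation.

```python
AMPLIFIABLE_ACTIONS = {"jump", "duck", "arm_swing", "punch", "throw"}  # illustrative set

def handle_sensed_action(action: str, magnitude: float, ceiling_clearance_m: float) -> str:
    # Step 306: is this an action the system is configured to amplify?
    if action not in AMPLIFIABLE_ACTIONS:
        return f"{action}: interact without amplification"          # step 314

    # Step 308: is the application context suitable? (e.g., a low cave ceiling
    # over the avatar makes an amplified jump inappropriate)
    if action == "jump" and ceiling_clearance_m < 2.0:
        return f"{action}: context unsuitable, no amplification"    # step 314

    # Steps 310 and 312: animate the avatar with motion amplified in proportion
    # to the magnitude of the user's motion, synchronized to the user's start
    # and end, and provide feedback proportional to that magnitude.
    factor = 1.0 + 2.0 * magnitude  # illustrative proportional factor
    return f"{action}: amplified by {factor:.1f}x with proportional feedback"

print(handle_sensed_action("jump", 0.6, ceiling_clearance_m=5.0))
print(handle_sensed_action("wave", 0.3, ceiling_clearance_m=5.0))
```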
Although
In step 504, the system will access scaling parameters. For example, the system will employ a number to be used as a multiplier to create the amplification of movement for the avatar corresponding to the user's movement. In one embodiment, the multiplier can be an integer. In other embodiments, more complex mathematical functions can be used to identify the appropriate multiplier. The multiplier can be based on the magnitude of movement of the user, the context of the application, and/or other environmental conditions. In one embodiment, the system will store a set of multipliers or a mathematical equation/relationship to be evaluated. The set of multipliers or mathematical equation/relationship is accessed in step 504. Step 506 includes determining the magnitude of amplification from the set of multipliers or mathematical equation/relationship accessed in step 504.
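The two approaches described in step 504, a stored set of multipliers or a mathematical equation/relationship evaluated against the magnitude of the user's motion, might be represented as in the following sketch. The specific multipliers, magnitude bands, and constants are illustrative assumptions, not values from this disclosure.

```python
# Option 1: a stored set of multipliers keyed by action and a coarse magnitude band.
STORED_MULTIPLIERS = {
    ("jump", "small"): 2,
    ("jump", "large"): 4,
    ("duck", "small"): 2,
    ("duck", "large"): 3,
}

def multiplier_from_table(action: str, magnitude: float) -> int:
    band = "large" if magnitude > 0.5 else "small"
    return STORED_MULTIPLIERS.get((action, band), 1)

# Option 2: a mathematical relationship evaluated against the magnitude,
# optionally adjusted by the context of the application (e.g., available headroom).
def multiplier_from_equation(magnitude: float, headroom_m: float = 10.0) -> float:
    raw = 1.0 + 3.0 * magnitude         # proportional to the sensed magnitude
    return min(raw, headroom_m / 2.0)   # capped by the context of the scene

print(multiplier_from_table("jump", 0.8))    # -> 4
print(multiplier_from_equation(0.8))         # -> 3.4
```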
In step 508, the system will determine the movement data for the avatar based on the magnitude of amplification determined in step 506 and the user's movement which was sensed in step 302 of
In step 512, the system will provide audio/visual feedback in proportion to the magnitude of movement. In one embodiment, the system will make sounds with the pitch or tone of the sound varying based on the magnitude/factor of amplification determined in step 506, which is itself based on the magnitude of movement of the user. In other embodiments, the system can provide visual feedback at the beginning, during, or end of the action. For example, if the user jumps and the avatar makes a higher jump, when the avatar lands the ground can shake in proportion to the magnitude/factor of amplification. Alternatively, the avatar's hair can blow in the wind, where the wind has a speed based on the magnitude/factor of amplification. Other examples of audio/visual feedback include a cloud at the top of a jump, ducks flying at takeoff of the jump, a thud at landing, footprints where the jumper took off, etc. Any of these visual feedbacks can be varied based on the magnitude/factor of amplification. For example, change the amount of dust flying, change the size of the cloud, change the volume/pitch/tone of the thud at landing, and/or change the size of the footprints.
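As a final illustration, the sketch below scales several of the feedback cues described above (thud pitch and volume, ground shake, dust, footprint size) with the magnitude/factor of amplification. The base values and scaling constants are assumptions made for the example, not values from this disclosure.

```python
def feedback_parameters(amplification_factor: float) -> dict:
    """Scale several audio/visual feedback cues with the amplification factor."""
    return {
        # Audio: pitch and volume of the landing "thud" rise with the factor.
        "thud_pitch_hz":   100.0 + 40.0 * amplification_factor,
        "thud_volume_db":  min(0.0, -20.0 + 5.0 * amplification_factor),
        # Visual: screen shake, dust, and footprint size grow with the factor.
        "ground_shake_px": 2.0 * amplification_factor,
        "dust_particles":  int(50 * amplification_factor),
        "footprint_scale": 1.0 + 0.25 * amplification_factor,
    }

print(feedback_parameters(3.0))
```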
Although
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. It is intended that the scope of the invention be defined by the claims appended hereto.