In a typical computing environment, a user has an input device such as a keyboard, a mouse, a joystick or the like, which may be connected to the computing environment by a cable, a wire, a wireless connection, or some other means of connection. If control of the computing environment is shifted from a connected controller to gesture based control, particularly as in a natural user interface (NUI), the user no longer has a connected device that informs the computing environment of a control instruction for the application with consistent reliability.
For example, when a computing environment has a set input such as a controller or keyboard, a user can determine that he has a controller connected to a port, that he is pressing keys or buttons, and that the system is responding. When control over the computing environment is shifted to the gestures of a user, detecting gestures, unlike controller input, can be inhibited or can produce a sub-optimal response from the application due to visual or audio characteristics of the capture area or of the user's body movements. The inability to properly detect gestures can frustrate the user in interacting with an executing application. For example, his participation in a game being executed by the application may be frustrated.
Technology is presented for providing feedback to a user on an ability of an executing application to track user action for control of the executing application on a computer system. A capture system detects a user in a capture area. Responsive to a user tracking criteria not being satisfied, feedback is output to the user. In some examples, the feedback can be an audio indicator. In other examples, visual indicators are provided as feedback to recommend an action for the user to take to satisfy the tracking criteria. In some embodiments, the feedback is provided within the context of an executing application. In one embodiment, technology is presented for assisting a user in selecting a capture area. Additionally, selection of a feedback response can be determined according to criteria in some embodiments.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Technology is presented for providing feedback to a user on an ability of an executing application to track user action for control of the executing application on a computer system. One example of a distinguishability factor which can affect the ability to track a user is when a body part of the user which controls a display object is at least partially out of a field of view of an image capture system. Other factors include ambient factors such as lighting effects, certain types of obstructions and audio factors such as loudness and distinguishability of speech (e.g. syllables or words).
In some embodiments, the technology presents feedback responses that are explicit suggestions to a user. In other embodiments, the feedback is subtle or implicit, being provided within the context of an executing application. For example, when a user is too close to a border of a field of view of a capture system, an object such as a scary monster within a scene appears on the side of the display near that border. The user is motivated to move away from the field of view border toward the center of the field of view to escape the monster.
Examples of factors upon which selection of a feedback response can be determined are discussed further below.
According to the example embodiment, the target may be a human target (e.g. user 18), a human target with an object, two or more human targets, or the like that may be scanned to generate a model such as a skeletal model, a mesh human model, or any other suitable representation thereof. The model may be tracked such that physical movements or motions of the target may act as a real-time user interface that adjusts and/or controls parameters of an application. Furthermore, the model can be presented to applications as a model and delivered to them in real-time. For example, the tracked motions of a user may be used to move an on-screen character or avatar in an electronic role-playing game.
In one example in which the model is a multi-point skeletal model, target recognition, analysis, and tracking system 10 efficiently tracks humans and their natural movements based on models of the natural mechanics and capabilities of the human muscular-skeletal system. The example system 10 also uniquely recognizes individuals in order to allow multiple people to interact with the system via natural movements of their limbs and body.
Movements of a user can be tracked to an avatar which can be a computer-generated image which represents a user who is typically a human. The avatar can depict an image of the user that is highly representative of what the user actually looks like or it may be a character (e.g. human, fanciful, animal, animated object) with varying degrees of resemblance to the user or none at all.
Specifically,
The audiovisual display system 16 can be an advanced display system such as a high-definition television (HDTV). In other embodiments, the display may be a lower resolution display, some examples of which include a television, a computer monitor, or mobile device display. The audiovisual system 16 may receive the audiovisual signals from the computing system 12 over a communication interface (e.g. an S-Video cable, a coaxial cable, an HDMI cable, a DVI cable, a VGA cable) and may then output the game or application visuals and/or audio associated with the audiovisual signals to the user 18.
A gesture comprises a motion or pose that acts as user input to control an executing application. Through moving his body, a user may create gestures. For example, a user may be captured in image data. An identified gesture of the user can be parsed for meaning as a control for an application or action to be performed. For example, the user 18 throws a jab in the boxing game of
For example, the target recognition, analysis, and tracking system 10 may be used to recognize and analyze a punch of the user 18 in the capture area 30 such that the punch may be interpreted as a gesture, in this case a game control of a punch for his player avatar 24 to perform in game space. Other gestures by the user 18 may also be interpreted as other controls or actions, such as controls to bob, weave, shuffle, block, jab, or throw a variety of different power punches. By tracking the punches and jabs of user 18, the boxing game software application determines his avatar's 24 score and which avatar (22 or 24) will win the match. Different applications will recognize and track different gestures. For example, a pitch by a user in a baseball game is tracked in order to determine whether it is a strike or a ball.
According to other example embodiments, the gesture based system 10 may further be used to interpret target movements as operating system and/or application controls that are outside the realm of games. For example, virtually any controllable aspect of an operating system and/or application may be controlled by movements of the target such as the user 18.
The camera system 20 captures image data of the user in a capture area 30 within the field of view of the camera system 20. In this example, the capture area 30 in the field of view of the camera system 20 is a trapezoid 30, which from a user's perspective, has a shorter line 30f as the front of the capture area, a back line 30b (e.g. a wall can form this line) and a left side 30l and a right side 30r. The field of view can have different geometries. For example, the boundaries and obstructions of a capture area can affect its geometry. For example, if the users were playing in a gymnasium, a back wall may be much further back so the field of view is more cone shaped than trapezoidal. In other instances, a lens type of the camera can affect the field of view as well.
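For purposes of illustration, the trapezoidal capture area described above can be modeled as a region whose lateral half-width grows linearly with distance from the camera. The following is a hypothetical sketch only; the function name and parameterization are assumptions, not part of the described system:

```python
def in_capture_area(x: float, z: float, z_front: float, z_back: float,
                    half_width_front: float, half_width_back: float) -> bool:
    """Hypothetical trapezoidal field-of-view test.

    x is the user's lateral offset from the camera axis in meters; z is
    distance from the camera. The half-width interpolates linearly between
    the shorter front line (30f) and the longer back line (30b).
    """
    if not (z_front <= z <= z_back):
        return False  # in front of line 30f or behind line 30b
    t = (z - z_front) / (z_back - z_front)
    half_w = half_width_front + t * (half_width_back - half_width_front)
    return abs(x) <= half_w  # inside the left (30l) / right (30r) sides
```

A gymnasium capture area, with the back wall much further away, would simply use a larger `z_back`, giving the more cone-like shape mentioned above.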
In
One or more off-screen display elements can also be used to provide user tracking feedback. In the illustrated example, a display light 32 such as a light emitting diode (LED) can be associated with a particular user. For example, different colors can be used to show tracking quality. Yellow can be a warning; green can be satisfactory; red can indicate a problem. In another example, different lighting patterns can indicate tracking quality. For example, each light can be associated with a boundary of the field of view. If a light element turns a certain color, that color comprises feedback that a user is too close to the associated boundary.
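The color scheme just described can be sketched as a simple mapping from a tracking-quality score to an LED color. This is a minimal illustration; the quality scale and thresholds below are assumptions, not values from the described system:

```python
def led_color(tracking_quality: float) -> str:
    """Map a hypothetical quality score in [0.0, 1.0] to an LED color.

    Follows the scheme described above: green is satisfactory,
    yellow is a warning, red indicates a problem.
    """
    if tracking_quality >= 0.8:
        return "green"
    if tracking_quality >= 0.5:
        return "yellow"
    return "red"
```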
The context of an application comprises the activity which is the purpose of the application. For example, in a menu user interface application, opening or closing a file would be contextual to the application. Avatars and scene objects moving according to the action of a game are contextual to the game. Some examples of actions that are contextual in a gaming application are throwing a punch, the arrival of a new enemy or monster as an obstacle, where a ball is thrown or caught, a change in the scenery as an avatar or user's view moves through a virtual environment, or a change of direction or perspective of a user's view of the game action.
Coffee table 15 is an example of an obstruction that can block a user's body part. The boxing game can have difficulty detecting the “shuffle” gesture in boxing due to the user's 18 legs being partially obscured by the coffee table.
It should be recognized that
Still another example is shown in
Software providing user tracking feedback can provide training for the user on the display to assist the user in getting a sense of where the boundaries of the capture area are and of what different feedback responses mean. A certain sound can be identified with being centered in the field of view or having good visibility, while another sound indicates the user is getting too close to a boundary or that there is an obstruction or other effect or item degrading the tracking quality.
In one example, tracking quality or tracking criteria can be based on how many gestures could not be identified in a given time period while user presence and engagement with the target recognition and tracking system have been established. In another example, tracking quality or tracking criteria can be based on detecting presence and engagement but not being able to recognize a key body part in the image data for the application such as, for example, an arm in a baseball game. Besides visibility factors affecting the tracking criteria or quality, distinguishability factors can also include audio factors, as some applications rely on a body feature such as the voice, and not just on movements of body parts.
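The first criterion above, counting gestures that could not be identified in a given time period, can be sketched as follows. The function names and the threshold are assumptions for illustration:

```python
def tracking_quality(attempted: int, unidentified: int) -> float:
    """Fraction of gesture attempts identified in a given time period."""
    if attempted == 0:
        return 1.0  # nothing attempted, nothing missed
    return 1.0 - (unidentified / attempted)

def satisfies_tracking_criteria(quality: float, threshold: float = 0.75) -> bool:
    """Hypothetical criterion: feedback is triggered when quality
    drops below the threshold."""
    return quality >= threshold
```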
The technology may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Likewise, the particular naming and division of modules, routines, features, attributes, methodologies and other aspects are not mandatory, and the mechanisms that implement the technology or its features may have different names, divisions and/or formats. Furthermore, as will be apparent to one of ordinary skill in the relevant art, the modules, routines, features, attributes, methodologies and other aspects of the embodiments disclosed can be implemented as software, hardware, firmware or any combination of the three. Of course, wherever a component, an example of which is a module, is implemented as software, the component can be implemented as a standalone program, as part of a larger program, as a plurality of separate programs, as a statically or dynamically linked library, as a kernel loadable module, as a device driver, and/or in every and any other way known now or in the future to those of ordinary skill in the art of programming. For example, in one embodiment, the tracking feedback software 213 discussed below can be implemented partially in an application programming interface (API) to handle application independent feedback response, and partially in software of a specific application to handle contextual application feedback.
The display view control system 202 comprises a motion module 204 which accesses a data buffer 223 for incoming image data 205 and, optionally, audio data 217. In the example embodiment shown, the display view control system 202 receives motion tracking data 205 locally from the audiovisual data capture system 20. Additionally, the display view control system 202 can receive motion tracking data 205i remotely over the Internet 203 or other network. With respect to a user, motion tracking data may comprise the image data itself or a downsampled version of that data. Additionally, depth data, and various forms of data derived from image and/or depth data, can be included in motion tracking data, some examples of which are a model for the body of the user, motion data in mathematical primitives which reference the model, or a bitmask image derived for the user for comparison with a previous state of the model. The display view control system 202 analyzes this data to recognize motion of the user and track that motion to objects on the display, for example, to the user's onscreen avatar. An avatar is a type of scene or display object.
The motion module 204 is communicatively coupled to an avatar display control module 209, an object control module 211, tracking feedback software 213, gesture recognition software and data 206, an audiovisual data capture system 20, and the Internet 203. The motion module 204 also has access to datastores 220 stored in memory such as model data 216 for at least one of each of a user 216u, an avatar 216a, and an object on the display 216o or one held by a user. Model data is used by the motion module 204 as a reference for motion tracking of a user in a capture area, or of an avatar or object in a scene or display view. In systems where the user's body movements are mapped to the avatar's movements (e.g. based on image capture of the user, or sensors on the user's body), there can be model data representing the user 216u and model data representing the avatar 216a. Where the avatar's physique is quite different, the motion module 204 performs a mapping between the two models. For example, the boy user 18 is shorter and likely does not have the arm reach of his avatar boxer 24. In other words, if skeletal models were used, they may not be the same for the user and the avatar. In some embodiments, however, the application uses the same model data 216 for analyzing the body movements of a user and for directing the motions of the corresponding avatar. In one example, the body model may be implemented as one or more data structures representing body parts and their positions in dimensions and/or rotation angles with respect to a reference. The model data 216 can be updated in terms of absolute positions or with changes in positions and rotations. The changes in positions and rotations may be represented as vectors and angles.
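The body model representation described above, positions and rotation angles relative to a reference, updated either absolutely or by deltas, might be sketched as below. The `Joint` structure and its fields are hypothetical, not the actual data structures of the described system:

```python
from dataclasses import dataclass

@dataclass
class Joint:
    """One body part in a hypothetical skeletal model."""
    x: float      # position relative to a reference point
    y: float
    z: float
    angle: float  # rotation angle relative to the parent joint, in degrees

    def apply_delta(self, dx: float, dy: float, dz: float,
                    dangle: float) -> None:
        """Update with changes in position and rotation (the vector-and-angle
        update style described above) rather than absolute values."""
        self.x += dx
        self.y += dy
        self.z += dz
        self.angle = (self.angle + dangle) % 360.0
```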
The motion module 204 also has access to profile data 214. In this example, there are profiles for users 214u, avatars 214a and objects 214o. The motion module 204 also has access to display data 218 which includes avatar image data 219 and object image data 221.
The avatar display control module 209 updates the avatar image data 219 based on gestures recognized by the gesture recognition software 206 and other applicable motion identified by the motion module 204. In one example, the image data 219 representing motions or poses can be motion tracking data for the avatar. In one example, such motion tracking data can be stored in a motion capture file which the motion module 204 updates over time as new motion tracking data 205 is received.
The object control module 211 updates image data 221 for objects affected by the user's recognized gestures. Furthermore, the avatar control module 209 and the object control module 211 update their respective image data (219, 221) responsive to instructions from the action control module 210. The action control module 210 supervises the executing application. For example, in a game environment, it keeps score, determines that a new background is needed as the avatar has moved to a new level of game play, and determines how other non-user controlled avatars or objects will be placed in a scene. In a non-gaming environment, it identifies what action the user is requesting. For example, if a user gesture is a request to open a file, it can access the file itself or provide instructions to the object control module 211 to access the file and display it on the display 16. The display processing module 207 combines the display data 218 in order to update the display.
Display data 218 provides a scene or view context and defines the other objects in the scene or view. For example, the display data 218 is providing a context environment of a boxing match in
The avatar display control module 209 and the object control module 211 periodically or in response to a message from the motion module 204 read updates to their respective profile(s) 214 and process image data 219, 221 representing motions or poses for the avatar or object. The image data 219, 221 can be rendered locally on a display 16 or it can be transmitted over the Internet 203 or another network.
Examples of avatar profile data 214a can be color image data for features of the avatar such as hair, facial features, skin color, clothing, the position of the avatar in the scene and any props associated with it.
Examples of information which can be stored in the profile 214u of a user can include typical modes of usage or play, age, height, weight information, names, disability, high scores or any other information associated with a user and usage of the system.
An example of a factor affecting the selection of a feedback response is user profile information. For example, the age or a disability of a user, physical or mental, can make one type of feedback response more appropriate than another. For example, a 5 year old may not be able to read, so subtle feedback in the context of the application may be more appropriate than explicit text on the screen. A player may be deaf, so an audio response is not appropriate.
As discussed above, some motions and poses, gestures, have special meaning in the context of an entertainment program or other application, and the display view control system 202 executes instructions to identify them. In particular, the motion module 204 has access to gesture recognition software 206 and data for recognizing or identifying gestures based on a model and motion tracking data.
The gesture recognition software 206 can include gesture filters. In one example, the motion module 204 can select one or more gesture filters 206 based on an associative data index, such as a body part index for example. For example, when a motion tracking data set update is received by the display view control system 202 and motion changes for certain body parts are indicated, the motion module 204 indexes gesture filters associated with those certain body parts.
The gesture filters 206 execute instructions based on parameter data defining criteria for determining whether a particular gesture has been performed based on motion tracking data 205. In one embodiment, each gesture filter 206 is linked with a library module for a particular gesture in a gestures library. Each library module associated with a gesture includes executable instructions to perform processing responsive to the gesture. This processing often involves updating the avatar's motion or image to reflect the gesture in some form.
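The relationship between a gesture filter, its parameter criteria, and a linked library module can be sketched as below. The class shape, the confidence parameter, and the callback standing in for the library module's executable instructions are all assumptions for illustration:

```python
class GestureFilter:
    """Hypothetical gesture filter: criteria plus a linked library module."""

    def __init__(self, name, body_parts, min_confidence, on_gesture):
        self.name = name
        self.body_parts = set(body_parts)   # index key for filter selection
        self.min_confidence = min_confidence  # parameter criteria
        self.on_gesture = on_gesture        # stands in for the library module

    def matches(self, moved_parts) -> bool:
        """Body-part index lookup: does this filter cover any moved part?"""
        return bool(self.body_parts & set(moved_parts))

    def evaluate(self, confidence: float) -> bool:
        """Run the linked processing if the gesture criteria are satisfied."""
        if confidence >= self.min_confidence:
            self.on_gesture()
            return True
        return False
```

A motion module could then index filters by body part and evaluate only those whose parts changed in the latest motion tracking update.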
For example, the teenage boy user 18 in
Some systems may provide a combination of only a certain number of motions or poses that an avatar can perform for certain body parts or regions, for example the hands and arms, while allowing direct tracking of other body parts or regions, for example the legs.
The tracking of user motions to update the display of an avatar or other display view objects is performed in real time such that the user may interact with an executing application in real time. A real-time display refers to the display of a visual representation responsive to a gesture, wherein the display is presented simultaneously or almost simultaneously with the performance of the gesture in physical space. For example, the system may update a display that echoes a user at a rate of 20 Hz or higher, wherein insignificant processing delays result in minimal delay of the display or are not visible to the user at all. Thus, real-time includes any insignificant delays pertaining to the timeliness of data which has been delayed by the time required for automatic data processing.
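A 20 Hz update rate implies a budget of 50 ms per frame for capturing, tracking, and rendering. A toy pacing loop, with hypothetical names, might look like the following; it is a sketch of the timing constraint only, not of the actual pipeline:

```python
import time

FRAME_RATE_HZ = 20
FRAME_BUDGET_S = 1.0 / FRAME_RATE_HZ  # 50 ms per display update

def run_frames(process_frame, n_frames: int) -> None:
    """Process each frame, then sleep out the remainder of the budget so
    the display keeps pace with the user's gestures in physical space."""
    for _ in range(n_frames):
        start = time.monotonic()
        process_frame()  # capture, track, and render one update
        elapsed = time.monotonic() - start
        if elapsed < FRAME_BUDGET_S:
            time.sleep(FRAME_BUDGET_S - elapsed)
```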
The tracking feedback software 213 receives a message from the motion module 204 identifying a tracking issue. An example of a tracking issue is that the user has moved out of the field of view or that no user is in the field of view. Some other examples of a tracking issue are that a gesture for a body part cannot be determined, or that a threshold probability that a motion or pose corresponds to any particular gesture is not satisfied. Another example of a tracking issue is that the sound from the user is not loud enough for speech or song recognition software to detect it. In another example, the sound from the user has indistinguishable syllables so the words cannot be detected. For example, in a music entertainment application, the inability to identify or detect the user's singing can significantly frustrate the user player.
The motion module 204 can also provide feedback on a number of distinguishability factors based on data it has received from the audiovisual capture system 20 or in messages from the audiovisual capture system 20. One type of distinguishability factor is an audio factor such as insufficient volume or distinctiveness, as mentioned above. Another type is a visibility factor. An example of a visibility factor is the user, or a body part for controlling the application, being at least partially out of the field of view. Another is a lighting issue washing out the user in the captured image. Another example of a visibility factor is an obstruction. An example of this can be furniture, even in the field of view itself, like the coffee table 15 in
The tracking feedback software 213 can select a type of feedback indicating a tracking problem to provide to a user. In some examples, the feedback provides instructions to the user to improve the tracking quality with a visual or audio indicator which is independent of the activity of the application. For example, the explicit screen arrow overlay 33 in
Other feedback can be a bit more implicit and subtle. Visual characteristics of the display can be changed. For example, the sharpness of the display view can be slightly degraded as the user gets within a distance of a boundary of the field of view; the view gets blurrier. As the user moves toward the center of the field of view, the sharpness of the scene improves. The user may not even consciously notice the change in sharpness. Besides sharpness, another visual characteristic which can be changed to alter the display of the scene or view is the vibrancy of the color. Another example is the color itself. For example, as the user moves out of the field of view, the color on the display fades to black and white.
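The sharpness falloff just described can be sketched as a linear ramp on the user's distance to the nearest field-of-view boundary. The function name and the warning distance are assumed parameters, not values from the described system:

```python
def display_sharpness(dist_to_boundary: float,
                      warn_dist: float = 0.5) -> float:
    """Hypothetical sharpness factor: 1.0 (fully sharp) at or beyond
    warn_dist meters from any field-of-view boundary, falling linearly
    to 0.0 (maximally blurred) as the user reaches the boundary."""
    return max(0.0, min(1.0, dist_to_boundary / warn_dist))
```

The same ramp could drive color vibrancy or a color-to-grayscale blend instead of sharpness.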
In some examples, the tracking feedback software 213 determines that feedback in the context of the application is appropriate. Some of these examples have been mentioned above, such as a monster appearing near the border the user is too close to, or a ball for a user to “hit” being sent in a direction in which the user should move. Other examples include an enemy to shoot at appearing in a direction the user should move towards, another avatar player coming in and bumping the user back toward the center, or other characters running in a direction that leads the user back into the field of view. In another example, a sound can come from a certain direction to motivate a user to move toward it. If the user is too far away from the image capture system 20, as shown in depth data captured by the system 20, a flashing object or other attention-getting display object can be used to attract the user forward into the field of view. If a user is getting too close to the camera 20, e.g. between the front boundary 30f of the field of view and the camera, a display object can be made to fill much of the screen to make the user take steps backward.
For this contextual application feedback, the tracking feedback module 213 sends a request to the action control software module 210 for contextual feedback. The action control module 210 or the tracking software 213 can track which contextual feedback techniques have been used, to avoid repetition as much as possible. The contextual feedback request can include the type of distinguishability factor to be addressed. It can further include a suggestion for action; for example, it can include a target zone on the display in which to place an object to motivate movement of a user back into the field of view. In other cases, the action control module 210 can determine the action to take.
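A possible shape for the contextual feedback request, and for tracking used techniques to avoid repetition, is sketched below. All names and fields are hypothetical illustrations of the message described above:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ContextualFeedbackRequest:
    """Hypothetical request from tracking feedback to action control."""
    factor: str                 # distinguishability factor, e.g. "near_left_boundary"
    suggested_action: str       # e.g. "move_right"
    target_zone: Optional[Tuple[int, int, int, int]] = None  # screen rect for an object

def pick_technique(candidates, used):
    """Prefer a contextual technique not yet used; once all have been
    used, reset and start over to keep repetition to a minimum."""
    for t in candidates:
        if t not in used:
            used.add(t)
            return t
    used.clear()
    used.add(candidates[0])
    return candidates[0]
```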
The tracking feedback software 213 can base its selection of a feedback response on criteria. As mentioned above, one example of such criteria is the age of the user. For a 3 or 4 year old, placing an item in a target zone on the screen to make that user move may not be too repetitive for a child of that age. For a 7 year old, the specific contextual feedback technique may need to be varied a bit more.
The competitiveness level of the application can be another factor. In a game played against another user, putting a target to shoot at in a display zone to encourage movement of the user can be inappropriate. However, placing an explosion near the border of the field of view can make the user move without increasing the predictability of targets.
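Combining the criteria discussed, age, disability, and competitiveness, a hypothetical selection policy might look like the following. The categories returned are illustrative labels only, and the thresholds are assumptions:

```python
def select_feedback(age: int, deaf: bool, competitive: bool) -> str:
    """Sketch of a feedback-response selection policy based on the
    criteria above; all categories and thresholds are hypothetical."""
    if age < 7:
        return "contextual"            # pre-readers: subtle in-game feedback
    if competitive:
        return "contextual_explosion"  # avoids adding predictable targets
    if deaf:
        return "visual_overlay"        # audio indicators are not appropriate
    return "audio_or_visual"
```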
Some applications do not want anything obscuring the action being displayed by the application on the display. In these instances, display 16 or console 12 can include off-screen display devices. For example, the light emitting diode (LED) 32 on the console 12 or on camera 20 can be associated with the user, and the ability of the application to track the user can be indicated by a color palette. For example, green is good. Yellow is a warning that tracking ability is degrading, and red indicates tracking ability has degraded to an unacceptable level. Other off-screen display devices or views can be used such as bar graphs or other lights on the console 12, display 16, or camera 20 indicating the degree of tracking ability.
As a compromise, the application can allow a small icon to appear on a user's avatar for a user who is too near a boundary or whose tracking is not satisfying tracking criteria. The user can select the icon if desired. In another example, a small box, picture in picture, showing the user or his avatar can be displayed indicating a tracking issue and even a suggestion for addressing it. In another example, if the user hits pause, a box showing the user or his avatar with the tracking issue message can be displayed.
In other examples, the tracking feedback module 213 can send a request for a change in appearance of an avatar to the avatar display control module 209. An example of a change in appearance can be a highlighting of the avatar. Other examples include changing visual characteristics such as blurring the avatar's appearance, or making it all black or all white or black and white as opposed to color. The avatar can be made to look faded in another example. Particularly if a training session occurred prior to the start of the application, the user can be instructed that a change of appearance of his avatar means there are tracking problems with his gestures. For field of view issues, this can be effective without changing competitive action too much.
Feedback can be audio or audiovisual. For example, in a training session before the start of the activity of the application, a user can be instructed that a particular icon means a particular visibility factor is affecting recognition of the user's gestures. The icon can be accompanied by a particular sound so that the user knows he is the one being signaled and, for example, knows to move inbounds.
As not being able to detect a user's gestures properly can significantly affect execution of the application, pausing the action, perhaps coupled with a change in appearance of the user's avatar, can be a feedback response selected as well. In another example, the sound can be stopped.
In some embodiments, tracking feedback software can be separate from a particular application. For example, the tracking feedback software can provide an API to which an application can send a request for a feedback response. This can be a convenient interface for application developers who wish to use default feedback responses. Of course, other constructs besides an API can be used. The application software can provide additional types of user tracking feedback responses beyond those provided by the API, or which the application developer prefers to use instead of the default mechanisms. In other embodiments, the tracking feedback can be handled entirely within the application or entirely by application independent software. The technology described herein is not limited to a particular code level implementation.

The image capture system 20 recognizes human and non-human targets in a capture area (with or without special sensing devices attached to the subjects), uniquely identifies them and tracks them in three dimensional space.
According to an example embodiment, the image capture system 20 may be configured to capture video with depth information including a depth image that may include depth values via any suitable technique including, for example, time-of-flight, structured light, stereo image, or the like. As shown in
According to another example embodiment, time-of-flight analysis may be used to indirectly determine a physical distance from the capture system 20 to a particular location on the targets or objects by analyzing the intensity of the reflected beam of light over time via various techniques including, for example, shuttered light pulse imaging.
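Time-of-flight analysis ultimately rests on the relation distance = (speed of light × round-trip time) / 2, since the emitted light travels out to the target and back. A sketch, with an assumed function name:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_seconds: float) -> float:
    """Distance from the capture system to a target location, given the
    measured round-trip time of an emitted light pulse. The division by
    two accounts for the outgoing and return legs of the path."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0
```

For example, a 20 ns round trip corresponds to a target roughly 3 m away.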
In another example embodiment, the capture system 20 may use a structured light to capture depth information. In such an analysis, patterned light (i.e., light displayed as a known pattern such as grid pattern or a stripe pattern) may be projected onto the capture area via, for example, the IR light component 72. Upon striking the surface of one or more targets or objects in the capture area, the pattern may become deformed in response. Such a deformation of the pattern may be captured by, for example, the 3-D camera 74 and/or the RGB camera 76 and may then be analyzed to determine a physical distance from the capture system to a particular location on the targets or objects.
According to another embodiment, the capture system 20 may include two or more physically separated cameras that may view a capture area from different angles, to obtain visual stereo data that may be resolved to generate depth information.
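Resolving stereo data from two separated cameras into depth typically follows the standard pinhole relation depth = focal length × baseline / disparity. The sketch below illustrates that relation only; it is not the described system's implementation:

```python
def stereo_depth(focal_px: float, baseline_m: float,
                 disparity_px: float) -> float:
    """Depth of a point from its disparity between two rectified views.

    focal_px: camera focal length in pixels; baseline_m: distance between
    the two cameras in meters; disparity_px: horizontal shift of the point
    between the two images, in pixels.
    """
    if disparity_px <= 0.0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

Nearer objects produce larger disparities, so depth falls as disparity grows.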
As an example of the synergy provided by these elements, consider that the IR light component 72 and the 3-D camera 74 may provide a depth image of a capture area, but in certain situations the depth image alone may not be sufficient to discern the position or movement of a human target. In those situations, the RGB camera 76 may “take over” or supplement the information from the 3-D camera to enable a more complete recognition of the human target's movement or position. For example, the RGB camera may be used to recognize, among other things, colors associated with one or more targets. If a user is wearing a shirt with a pattern on it that the depth camera may not be able to detect, the RGB camera may be used to track that pattern and provide information about movements that the user is making. As another example, if a user twists, the RGB camera may be used to supplement the information from one or more other sensors to determine the motion of the user. As a further example, if a user is next to another object such as a wall or a second target, the RGB data may be used to distinguish between the two objects. The RGB camera may also be capable of determining fine features of a user such as facial recognition, hair color and the like which may be used to provide additional information. For example, if a user turns backwards, the RGB camera may use hair color and/or the lack of facial features to determine that a user is facing away from the capture system.
The capture system 20 can capture data at interactive rates, increasing the fidelity of the data and allowing the disclosed techniques to process the raw depth data, digitize the objects in the scene, extract the surface and texture of the object, and perform any of these techniques in real-time such that the display (e.g. 16) can provide a real-time depiction of the scene on its display screen (e.g. 54).
In the system embodiment of
The capture system 20 further includes a memory component 82 for storing instructions that may be executed by the processor 80, as well as image data which may be captured in a frame format. The memory component 82 may include random access memory (RAM), read only memory (ROM), cache, Flash memory, a hard disk, or any other suitable storage component. In one embodiment, the memory component 82 may be a separate component in communication 90 with the image capture component 70 and the processor 80 as illustrated. According to another embodiment, the memory component 82 may be integrated into the processor 80 and/or the image capture component 70.
The capture system 20 further includes a processor 80 communicatively coupled 90 to the image camera component 70, which it controls, and to the memory 82 for storing image data. The processor 80 may include a standardized processor, a specialized processor, a microprocessor, or the like that may execute instructions that may include instructions for storing profiles, receiving depth image data, storing the data in a specified format in memory 82, determining whether a suitable target may be included in the depth image, converting the suitable target into a skeletal representation or other type of model of the target, or any other suitable instruction. Furthermore, some of this processing may be executed by other processors in one or more communicatively coupled computing environments.
The inclusion of processing capabilities in the image capture system 20 enables a model such as a multi-point skeletal model, of a user to be delivered in real-time. In one embodiment, there may be a separate processor for each of multiple components of the capture system, or there may be a single central processor. As another example, there may be a central processor as well as at least one other associated processor. If there is a high cost computing task, the two or more processors may share the processing tasks in any way. The processor(s) may include a memory as described above and the memory may store one or more user profiles. These profiles may store body scans, typical modes of usage or play, age, height, weight information, names, avatars, high scores or any other information associated with a user and usage of the system.
The capture system 20 may further include a microphone 78 which can be used to receive audio signals produced by the user. Thus, in this embodiment, the image capture system 20 is an audiovisual data capture system. The microphone(s) in the capture system may be used to provide additional and supplemental information about a target to enable the system to better discern aspects of the target's position or movement. For example, the microphone(s) may comprise directional microphone(s) or an array of directional microphones that can be used to further discern the position of a human target or to distinguish between two targets. For example, if two users are of similar shape or size and are in a capture area, the microphones may be used to provide information about the users such that the users may be distinguished from each other based, for example, on recognition of their separate voices. As another example, the microphones may be used to provide information to a user profile about the user, or, in a ‘speech to text’ type embodiment, at least one microphone may be used to create text in a computing system.
Pixel data with depth values for an image is referred to as a depth image. According to one embodiment, the depth image may include a two-dimensional (2-D) pixel area of the captured scene where each pixel in the 2-D pixel area has an associated depth value such as a length or distance in, for example, centimeters, millimeters, or the like of an object in the captured scene from a point of reference, e.g. with respect to some aspect of the camera component 70. For example, the depth values for the pixels may be represented in “Z layers,” which are layers that may be perpendicular to a Z axis extending from the depth camera 70 along its line of sight. These depth values may be referred to collectively as a depth map.
A depth image may be downsampled to a lower processing resolution such that the depth image may be more easily used and/or more quickly processed with less computing overhead. For example, various regions of the observed depth image can be separated into background regions and regions occupied by the image of the target. Background regions can be removed from the image or identified so that they can be ignored during one or more subsequent processing steps. Additionally, one or more high-variance and/or noisy depth values may be removed and/or smoothed from the depth image. Portions of missing and/or removed depth information may be filled in and/or reconstructed. Such backfilling may be accomplished by averaging nearest neighbors, filtering, and/or any other suitable method. Other suitable processing may be performed such that the depth information may be used to generate a model such as a skeletal model.
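The preprocessing steps above can be sketched as follows. This is a minimal illustrative sketch, not the system's implementation: the background threshold, the downsampling scheme, and the choice of 4-neighbor averaging for backfilling are all assumptions made for the example.

```python
# A minimal sketch of the depth-image preprocessing described above:
# downsampling, background separation, and backfilling of missing
# (zero-valued) pixels by averaging valid nearest neighbors.
# Depth values are in millimeters; the 4000 mm threshold is illustrative.

def downsample(depth, factor=2):
    """Reduce resolution by keeping every `factor`-th pixel per axis."""
    return [row[::factor] for row in depth[::factor]]

def mask_background(depth, max_mm=4000):
    """Zero out pixels farther than max_mm, treating them as background."""
    return [[d if 0 < d <= max_mm else 0 for d in row] for row in depth]

def backfill(depth):
    """Replace missing (zero) pixels with the mean of their valid 4-neighbors."""
    rows, cols = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    for r in range(rows):
        for c in range(cols):
            if depth[r][c] == 0:
                neighbors = [depth[nr][nc]
                             for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                             if 0 <= nr < rows and 0 <= nc < cols and depth[nr][nc] > 0]
                if neighbors:
                    out[r][c] = sum(neighbors) // len(neighbors)
    return out
```

A frame would typically flow through these in order (mask, backfill, downsample) before being handed to the model-fitting stage.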
A graphics processing unit (GPU) 108 and a video encoder/video codec (coder/decoder) 114 form a video processing pipeline for high speed and high resolution graphics processing. Data is carried from the graphics processing unit 108 to the video encoder/video codec 114 via a bus. The video processing pipeline outputs data to an A/V (audio/video) port 140 for transmission to a television or other display. A memory controller 110 is connected to the GPU 108 to facilitate processor access to various types of memory 112, such as, but not limited to, a RAM (Random Access Memory).
The multimedia console 12 includes an I/O controller 120, a system management controller 122, an audio processing unit 123, a network interface controller 124, a first USB host controller 126, a second USB controller 128 and a front panel I/O subassembly 130 that are implemented on a module 118. The USB controllers 126 and 128 serve as hosts for peripheral controllers 142(1)-142(2), a wireless adapter 148, and an external memory device 146 (e.g., flash memory, external CD/DVD ROM drive, removable media, etc.). The network interface 124 and/or wireless adapter 148 provide access to a network (e.g., the Internet, home network, etc.) and may be any of a wide variety of wired or wireless adapter components including an Ethernet card, a modem, a Bluetooth module, a cable modem, and the like.
System memory 143 is provided to store application data that is loaded during the boot process. A media drive 144 is provided and may comprise a DVD/CD drive, hard drive, or other removable media drive, etc. The media drive 144 may be internal or external to the multimedia console 12. Application data may be accessed via the media drive 144 for execution, playback, etc. by the multimedia console 12. The media drive 144 is connected to the I/O controller 120 via a bus, such as a Serial ATA bus or other high speed connection (e.g., IEEE 1394).
In one embodiment, a copy of the software and data for the display view control system 202 can be stored on media drive 144 and can be loaded into system memory 143 when executing.
The system management controller 122 provides a variety of service functions related to assuring availability of the multimedia console 12. The audio processing unit 123 and an audio codec 132 form a corresponding audio processing pipeline with high fidelity and stereo processing. Audio data is carried between the audio processing unit 123 and the audio codec 132 via a communication link. The audio processing pipeline outputs data to the A/V port 140 for reproduction by an external audio player or device having audio capabilities.
The front panel I/O subassembly 130 supports the functionality of the power button 150 and the eject button 152, as well as any LEDs (light emitting diodes) or other indicators exposed on the outer surface of the multimedia console 12. A system power supply module 136 provides power to the components of the multimedia console 12. A fan 138 cools the circuitry within the multimedia console 12.
The CPU 101, GPU 108, memory controller 110, and various other components within the multimedia console 12 are interconnected via one or more buses, including serial and parallel buses, a memory bus, a peripheral bus, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures can include a Peripheral Component Interconnect (PCI) bus, PCI-Express bus, etc.
When the multimedia console 12 is powered ON, application data may be loaded from the system memory 143 into memory 112 and/or caches 102, 104 and executed on the CPU 101. The application may present a graphical user interface that provides a consistent user experience when navigating to different media types available on the multimedia console 12. In operation, applications and/or other media contained within the media drive 144 may be launched or played from the media drive 144 to provide additional functionalities to the multimedia console 12.
The multimedia console 12 may be operated as a standalone system by simply connecting the system to a television or other display. In this standalone mode, the multimedia console 12 allows one or more users to interact with the system, watch movies, or listen to music. However, with the integration of broadband connectivity made available through the network interface 124 or the wireless adapter 148, the multimedia console 12 may further be operated as a participant in a larger network community.
When the multimedia console 12 is powered ON, a set amount of hardware resources is reserved for system use by the multimedia console operating system. These resources may include a reservation of memory (e.g., 16 MB), CPU and GPU cycles (e.g., 5%), networking bandwidth (e.g., 8 kbps), etc. Because these resources are reserved at system boot time, the reserved resources do not exist from the application's view.
In particular, the memory reservation is large enough to contain the launch kernel, concurrent system applications and drivers. The CPU reservation is constant such that if the reserved CPU usage is not used by the system applications, an idle thread will consume any unused cycles.
With regard to the GPU reservation, lightweight messages generated by the system applications (e.g., popups) are displayed by using a GPU interrupt to schedule code to render a popup into an overlay. The amount of memory required for an overlay depends on the overlay area size, and the overlay scales with screen resolution. Where a full user interface is used by the concurrent system application, it is preferable to use a resolution independent of the application resolution. A scaler may be used to set this resolution such that the need to change frequency and cause a TV resynch is eliminated.
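The claim that overlay memory scales with screen resolution can be made concrete with simple arithmetic: the overlay buffer grows with its pixel area. The function below is an illustrative sketch, not part of the console's API; a 32-bit (4-byte) pixel format is assumed.

```python
# Illustrative arithmetic for the overlay memory relation above:
# buffer size = width x height x bytes per pixel, so memory scales
# linearly with overlay pixel area (and hence with screen resolution).

def overlay_bytes(width_px, height_px, bytes_per_pixel=4):
    """Memory needed for an overlay of the given pixel dimensions."""
    return width_px * height_px * bytes_per_pixel

# A 640x480 overlay needs 1,228,800 bytes (about 1.2 MB);
# scaling it to 1280x720 exactly triples that requirement.
```

This is why a resolution-independent overlay, set once via a scaler, avoids both re-allocation and display mode changes.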
After the multimedia console 12 boots and system resources are reserved, concurrent system applications execute to provide system functionalities. The system functionalities are encapsulated in a set of system applications that execute within the reserved system resources described above. The operating system kernel identifies threads that are system application threads versus gaming application threads. The system applications are scheduled to run on the CPU 101 at predetermined times and intervals in order to provide a consistent system resource view to the application. The scheduling is to minimize cache disruption for the gaming application running on the console.
When a concurrent system application requires audio, audio processing is scheduled asynchronously to the gaming application due to time sensitivity. A multimedia console application manager (described below) controls the gaming application audio level (e.g., mute, attenuate) when system applications are active.
Input devices (e.g., controllers 142(1) and 142(2)) are shared by gaming applications and system applications. The input devices are not reserved resources, but are to be switched between system applications and the gaming application such that each will have a focus of the device. The application manager controls the switching of the input stream without the gaming application's knowledge, and a driver maintains state information regarding focus switches. The image capture system 20 may define additional input devices for the console 12 (e.g. for its camera system).
Computer 310 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 310 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 310. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
The system memory 330 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 331 and random access memory (RAM) 332. A basic input/output system 333 (BIOS), containing the basic routines that help to transfer information between elements within computer 310, such as during start-up, is typically stored in ROM 331. RAM 332 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 320. By way of example, and not limitation,
The computer 310 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only,
The drives and their associated computer storage media discussed above and illustrated in
In one embodiment, a copy of the software and data for the display view control system 202 can be stored in the applications programs 345 and program data 347 stored on the hard drive 238 or remotely (e.g. 248). A copy can also be loaded as an application program 226 and program data 228 in system memory 222 when executing.
A user may enter commands and information into the computer 310 through input devices such as a keyboard 362 and pointing device 361, commonly referred to as a mouse, trackball or touch pad. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 320 through a user input interface 360 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). A monitor 391 or other type of display device is also connected to the system bus 321 via an interface, such as a video interface 390. In addition to the monitor, computers may also include other peripheral output devices such as speakers 397 and printer 396, which may be connected through an output peripheral interface 390.
The computer 310 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 380. The remote computer 380 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 310, although only a memory storage device 381 has been illustrated in
When used in a LAN networking environment, the computer 310 is connected to the LAN 371 through a network interface or adapter 370. When used in a WAN networking environment, the computer 310 typically includes a modem 372 or other means for establishing communications over the WAN 373, such as the Internet. The modem 372, which may be internal or external, may be connected to the system bus 321 via the user input interface 360, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 310, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation,
Consoles 400A-X may invoke user login service 408, which is used to authenticate and identify a user on consoles 400A-X. During login, login service 408 obtains a gamer tag (a unique identifier associated with the user) and a password from the user as well as a console identifier that uniquely identifies the console that the user is using and a network path to the console. The gamer tag and password are authenticated by comparing them to a global user profile database 416, which may be located on the same server as user login service 408 or may be distributed on a different server or a collection of different servers. Once authenticated, user login service 408 stores the console identifier and the network path in the global user profile database 416 so that messages and information may be sent to the console.
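The login flow above can be sketched in a few lines. This is an illustrative sketch only: the dict stands in for global user profile database 416, the function and field names are assumptions, and a real service would hash credentials rather than compare plaintext passwords.

```python
# A minimal sketch of user login service 408: authenticate a gamer tag
# and password against a profile store, then record the console
# identifier and network path so messages can be routed to the console.

PROFILE_DB = {  # stands in for global user profile database 416
    "PlayerOne": {"password": "hunter2", "console": None, "path": None},
}

def login(gamer_tag, password, console_id, network_path, db=PROFILE_DB):
    """Return True on successful authentication, recording routing info."""
    profile = db.get(gamer_tag)
    if profile is None or profile["password"] != password:
        return False  # unknown user or bad credentials
    profile["console"] = console_id   # remember which console the user is on
    profile["path"] = network_path    # and how to reach that console
    return True
```

Once the routing information is stored, services such as the sharing service 412 can address messages to the authenticated user's console.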
In an embodiment, consoles 400A-X may include a gaming service 410, a sharing service 412, user sharing data 428 and a substitution database 418. The gaming service may allow users to play online interactive games, create and share gaming environments for joint game play between consoles, and provide other services such as an online marketplace, centralized achievement tracking across various games and other shared experience functions. A sharing service 412 allows users to share game play elements with other users. For example, a user on one of the consoles 400A-X may create elements for use in games and share them or sell them to other users. In addition, a user may record elements of the game play experience, such as a movie of a race or various scenes in a game, and share them with other users. Information provided by users for sharing or sale may be stored in the user sharing data 428.
The global user profile database 416 may include information about all the users on consoles 400A-X such as the users' account information and a console identifier that uniquely identifies a particular console that each user is using. The global user profile database 416 may also include user preference information associated with all the users on consoles 400A-X. The global user profile database 416 may also include information about users such as game records and a friends list associated with users.
Tracking feedback software 414 may be provided in the gaming service 410. The tracking feedback software can respond to tracking issues with gestures and movements in game play elements uploaded to the server and stored in user sharing data 428.
Any number of networked processing devices may be provided in accordance with a gaming system as provided in
The method embodiments of
The motion module 204 can provide scores or weights or some value representative of the quality of certain visibility factors for the test capture area to the tracking feedback software 213 for it to compare with tracking criteria based on visibility factors. In one embodiment, a weighting algorithm can then be applied to these factors to determine a score. Some examples of the factors include the location of the user's body part for the gesture in the field of view of the capture system, the lighting in the capture area, and obstructions of the body part.
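The weighting algorithm described above can be sketched as a weighted sum over per-factor quality values. This is a hedged sketch only: the factor names, weights, and threshold are assumptions for illustration, not values produced by the motion module 204 or the tracking feedback software 213.

```python
# Illustrative weighting algorithm: each visibility factor contributes
# a quality value in [0.0, 1.0], and a weighted sum yields a tracking
# quality score compared against the user tracking criteria.

VISIBILITY_WEIGHTS = {       # assumed factors and weights, summing to 1.0
    "lighting": 0.4,         # quality of illumination in the capture area
    "obstruction": 0.35,     # how unoccluded the relevant body part is
    "field_of_view": 0.25,   # body part placement in the camera's view
}

def tracking_quality_score(factor_values):
    """Combine per-factor quality values (0.0 worst .. 1.0 best)."""
    return sum(VISIBILITY_WEIGHTS[name] * factor_values.get(name, 0.0)
               for name in VISIBILITY_WEIGHTS)

def satisfies_tracking_criteria(factor_values, threshold=0.7):
    """True if the weighted score meets an illustrative threshold."""
    return tracking_quality_score(factor_values) >= threshold
```

Under this sketch, a poorly lit area fails the criteria even when the other factors are good, which is what would trigger the feedback described below.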
The different capture areas are rated or scored, and the best one is recommended 818. In another alternative, the image capture system 20 is rotated through a range of angles, capturing different views in a room, and the view that provides the best tracking ability is suggested as the capture area. The capture area with the best tracking ability can be displayed on the display 16 via the display processing module 207 to identify it for the user.
The motion module 204 may not be able to match a gesture with the movement the user is making, and can send a message to the tracking feedback software 213 indicating so. The motion module 204 can also indicate a visibility factor which is contributing to user tracking criteria not being satisfied. For example, the representative quality value for the lighting visibility factor can indicate that the lighting is poor.
Responsive to the user tracking criteria not being satisfied, the tracking feedback software 213 determines how one or more of the visibility factors can be improved for tracking, and outputs 820 feedback to the user identifying at least one change in the capture area to improve a visibility factor. The feedback can be outputted via visual display or as an audio message.
The tracking feedback software 213 determines 812 a tracking quality score for the test capture area. The motion module 204 can provide scores or weights or some value representative of the quality of certain visibility factors for the test capture area to the tracking feedback software 213 which can apply a weighting algorithm to these factors to determine a score.
The tracking feedback software 213 determines 814 whether there is another test capture area to be tested; for example, it requests user input via the display. If there is another test capture area, the software 213 displays 802 instructions to direct the camera's field of view to the next capture area, and the steps above are repeated for that test area. If there is not another test capture area, the tracking feedback software 213 displays a recommendation of the test capture area with the best visibility score; for example, the test capture area with the best score is displayed on the display.
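The selection at the end of that loop reduces to picking the highest-scoring candidate. The sketch below is illustrative; in practice the scores would come from the motion module 204 via the weighting step 812, while here they are supplied directly, and the function name and area labels are assumptions.

```python
# Illustrative final step of the test-capture-area loop: given the
# tracking quality score computed for each candidate area, recommend
# the area with the best score.

def recommend_capture_area(area_scores):
    """Return (area_label, score) for the best-scoring test capture area.

    area_scores -- mapping of capture-area label to tracking quality score
    """
    if not area_scores:
        raise ValueError("no test capture areas were evaluated")
    best = max(area_scores, key=area_scores.get)
    return best, area_scores[best]
```

The recommended label could then be handed to the display processing module 207 to identify the chosen area for the user.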
The foregoing detailed description has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the technology disclosed to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. The described embodiments were chosen in order to best explain the principles of the technology and its practical application to thereby enable others skilled in the art to best utilize the technology in various embodiments and with various modifications as are suited to the particular use contemplated. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
This application claims priority as a continuation-in-part of U.S. patent application Ser. No. 12/434,553 entitled “Binding Users to a Gesture Based System and Providing Feedback to the Users,” having inventors Alex Kipman, Kathryn Stone Perez, R. Stephen Polzin, and William Guo, filed on May 1, 2009 and which is hereby specifically incorporated by reference herein. U.S. patent application Ser. No. 12/788,731 entitled “Active Calibration of a Natural User Interface,” having inventor Kenneth Lobb, filed May 27, 2010, is hereby specifically incorporated by reference herein.
20070060383 | Dohta | Mar 2007 | A1 |
20070098222 | Porter et al. | May 2007 | A1 |
20070120834 | Boillot | May 2007 | A1 |
20070178973 | Camhi | Aug 2007 | A1 |
20070195067 | Zotov et al. | Aug 2007 | A1 |
20070200927 | Krenik | Aug 2007 | A1 |
20070216894 | Garcia et al. | Sep 2007 | A1 |
20070218994 | Goto et al. | Sep 2007 | A1 |
20070260984 | Marks et al. | Nov 2007 | A1 |
20070279485 | Ohba et al. | Dec 2007 | A1 |
20070283296 | Nilsson | Dec 2007 | A1 |
20070298882 | Marks et al. | Dec 2007 | A1 |
20080001951 | Marks et al. | Jan 2008 | A1 |
20080026838 | Dunstan et al. | Jan 2008 | A1 |
20080030460 | Hildreth et al. | Feb 2008 | A1 |
20080040692 | Sunday et al. | Feb 2008 | A1 |
20080062257 | Corson | Mar 2008 | A1 |
20080081701 | Shuster | Apr 2008 | A1 |
20080100620 | Nagai et al. | May 2008 | A1 |
20080113767 | Nguyen et al. | May 2008 | A1 |
20080126937 | Pachet | May 2008 | A1 |
20080134102 | Movold et al. | Jun 2008 | A1 |
20080141181 | Ishigaki et al. | Jun 2008 | A1 |
20080146302 | Olsen et al. | Jun 2008 | A1 |
20080152191 | Fujimura et al. | Jun 2008 | A1 |
20080200287 | Marty et al. | Aug 2008 | A1 |
20080215972 | Zalewski et al. | Sep 2008 | A1 |
20080215973 | Zalewski et al. | Sep 2008 | A1 |
20080244468 | Nishihara et al. | Oct 2008 | A1 |
20080261693 | Zalewski | Oct 2008 | A1 |
20080274804 | Harrison et al. | Nov 2008 | A1 |
20090005141 | Lehtiniemi et al. | Jan 2009 | A1 |
20090027337 | Hildreth | Jan 2009 | A1 |
20090063307 | Groenovelt et al. | Mar 2009 | A1 |
20090077504 | Bell et al. | Mar 2009 | A1 |
20090085864 | Kutliroff et al. | Apr 2009 | A1 |
20090133051 | Hildreth | May 2009 | A1 |
20090138805 | Hildreth | May 2009 | A1 |
20090141933 | Wagg | Jun 2009 | A1 |
20090167679 | Klier et al. | Jul 2009 | A1 |
20090183125 | Magal et al. | Jul 2009 | A1 |
20090209343 | Foxlin et al. | Aug 2009 | A1 |
20090221368 | Yen et al. | Sep 2009 | A1 |
20090221374 | Yen et al. | Sep 2009 | A1 |
20090228841 | Hildreth | Sep 2009 | A1 |
20090237490 | Nelson, Jr. | Sep 2009 | A1 |
20090270169 | Kondo | Oct 2009 | A1 |
20090298650 | Kutliroff | Dec 2009 | A1 |
20090313584 | Kerr et al. | Dec 2009 | A1 |
20090319459 | Breazeal et al. | Dec 2009 | A1 |
20100039500 | Bell et al. | Feb 2010 | A1 |
20100053304 | Underkoffler et al. | Mar 2010 | A1 |
20100220064 | Griffin et al. | Sep 2010 | A1 |
20100235786 | Maizels et al. | Sep 2010 | A1 |
20110009194 | Gabai et al. | Jan 2011 | A1 |
20110225534 | Wala | Sep 2011 | A1 |
20110306397 | Fleming et al. | Dec 2011 | A1 |
20110306398 | Boch et al. | Dec 2011 | A1 |
Number | Date | Country |
---|---|---|
1648840 | Aug 2005 | CN |
1828630 | Sep 2006 | CN |
101206715 | Jun 2008 | CN |
101254344 | Jun 2010 | CN |
0583061 | Feb 1994 | EP |
08044490 | Feb 1996 | JP |
H10207619 | Aug 1998 | JP |
11272293 | Oct 1999 | JP |
2003030686 | Jan 2003 | JP |
2007241797 | Sep 2007 | JP |
2007267858 | Oct 2007 | JP |
2008052590 | Mar 2008 | JP |
2009535167 | Oct 2009 | JP |
2010059465 | Mar 2010 | JP |
2010082386 | Apr 2010 | JP |
2011189066 | Sep 2011 | JP |
I316196 | Oct 2009 | TW |
9310708 | Jun 1993 | WO |
9717598 | May 1997 | WO |
9915863 | Apr 1999 | WO |
9944698 | Sep 1999 | WO |
0159975 | Jan 2002 | WO |
02082249 | Oct 2002 | WO |
03001722 | Mar 2003 | WO |
03046706 | Jun 2003 | WO |
03073359 | Nov 2003 | WO |
03054683 | Dec 2003 | WO |
03071410 | Mar 2004 | WO |
2008049151 | May 2008 | WO |
2009059065 | May 2009 | WO |
Entry |
---|
Foody, “A Prototype Sourceless Kinematic-Feedback Based Video Game for Movement Based Exercise”, Proceedings of the 28th IEEE Engineering in Medicine and Biology Society Annual International Conference, Aug. 30-Sep. 3, 2006, pp. 5366-5369, New York, USA. |
Charbonneau, “Poster: RealDance An Exploration of 3D Spatial Interfaces for Dancing Games”, IEEE Symposium on 3D User Interfaces, Mar. 14-15, 2009, pp. 141-142, Lafayette, LA, USA. |
Jää-Aro, “Reconsidering the Avatar: From User Mirror to Interaction Locus”, Doctoral Thesis, Mar. 2004, pp. 1-170, Stockholm, Sweden. |
Höllerer, “User Interface Management Techniques for Collaborative Mobile Augmented Reality”, Computers and Graphics, Oct. 2001, pp. 799-810, vol. 25, No. 5, Elsevier Science Ltd. |
U.S. Appl. No. 12/788,731, filed May 27, 2010. |
Office Action dated May 22, 2012, U.S. Appl. No. 12/788,731, filed May 27, 2010. |
Toyama, Kentaro, et al., “Probabilistic Tracking in a Metric Space,” Eighth International Conference on Computer Vision, Vancouver, Canada, vol. 2, Jul. 2001, 8 pages. |
Amendment filed Oct. 22, 2012, U.S. Appl. No. 12/788,731, filed May 27, 2010. |
Office Action dated Dec. 19, 2012, U.S. Appl. No. 12/788,731, filed May 27, 2010. |
Response to Office Action dated Mar. 19, 2013, U.S. Appl. No. 12/788,731, filed May 27, 2010. |
Amendment dated May 29, 2014, in Chinese Patent Appl. No. 201110150690.5 filed May 26, 2011. |
Office Action dated Apr. 30, 2014, in U.S. Appl. No. 12/788,731, filed May 27, 2010. |
Amendment dated Jan. 11, 2013, in Chinese Patent Appl. No. 201110150690.5 filed May 26, 2011. |
Office Action dated Mar. 14, 2014, in Chinese Patent Appl. No. 201110150690.5 filed May 26, 2011. |
Kanade et al., “A Stereo Machine for Video-rate Dense Depth Mapping and Its New Applications”, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1996, pp. 196-202, The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA. |
Miyagawa et al., “CCD-Based Range Finding Sensor”, Oct. 1997, pp. 1648-1652, vol. 44, No. 10, IEEE Transactions on Electron Devices. |
Rosenhahn et al., “Automatic Human Model Generation”, 2005, pp. 41-48, University of Auckland (CITR), New Zealand. |
Aggarwal et al., “Human Motion Analysis: A Review”, IEEE Nonrigid and Articulated Motion Workshop, 1997, University of Texas at Austin, Austin, TX. |
Shao et al., “An Open System Architecture for a Multimedia and Multimodal User Interface”, Aug. 24, 1998, Japanese Society for Rehabilitation of Persons with Disabilities (JSRPD), Japan. |
Kohler, “Special Topics of Gesture Recognition Applied in Intelligent Home Environments”, In Proceedings of the Gesture Workshop, 1998, pp. 285-296, Germany. |
Kohler, “Vision Based Remote Control in Intelligent Home Environments”, University of Erlangen-Nuremberg/Germany, 1996, pp. 147-154, Germany. |
Kohler, “Technical Details and Ergonomical Aspects of Gesture Recognition applied in Intelligent Home Environments”, 1997, Germany. |
Hasegawa et al., “Human-Scale Haptic Interaction with a Reactive Virtual Human in a Real-Time Physics Simulator”, Jul. 2006, vol. 4, No. 3, Article 6C, ACM Computers in Entertainment, New York, NY. |
Qian et al., “A Gesture-Driven Multimodal Interactive Dance System”, Jun. 2004, pp. 1579-1582, IEEE International Conference on Multimedia and Expo (ICME), Taipei, Taiwan. |
Zhao, “Dressed Human Modeling, Detection, and Parts Localization”, 2001, The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA. |
He, “Generation of Human Body Models”, Apr. 2005, University of Auckland, New Zealand. |
Isard et al., “CONDENSATION—Conditional Density Propagation for Visual Tracking”, 1998, pp. 5-28, International Journal of Computer Vision 29(1), Netherlands. |
Livingston, “Vision-based Tracking with Dynamic Structured Light for Video See-through Augmented Reality”, 1998, University of North Carolina at Chapel Hill, North Carolina, USA. |
Wren et al., “Pfinder: Real-Time Tracking of the Human Body”, MIT Media Laboratory Perceptual Computing Section Technical Report No. 353, Jul. 1997, vol. 19, No. 7, pp. 780-785, IEEE Transactions on Pattern Analysis and Machine Intelligence, Cambridge, MA. |
Breen et al., “Interactive Occlusion and Collision of Real and Virtual Objects in Augmented Reality”, Technical Report ECRC-95-02, 1995, European Computer-Industry Research Center GmbH, Munich, Germany. |
Freeman et al., “Television Control by Hand Gestures”, Dec. 1994, Mitsubishi Electric Research Laboratories, TR94-24, Cambridge, MA. |
Hongo et al., “Focus of Attention for Face and Hand Gesture Recognition Using Multiple Cameras”, Mar. 2000, pp. 156-161, 4th IEEE International Conference on Automatic Face and Gesture Recognition, Grenoble, France. |
Pavlovic et al., “Visual Interpretation of Hand Gestures for Human-Computer Interaction: A Review”, Jul. 1997, pp. 677-695, vol. 19, No. 7, IEEE Transactions on Pattern Analysis and Machine Intelligence. |
Azarbayejani et al., “Visually Controlled Graphics”, Jun. 1993, vol. 15, No. 6, IEEE Transactions on Pattern Analysis and Machine Intelligence. |
Granieri et al., “Simulating Humans in VR”, The British Computer Society, Oct. 1994, Academic Press. |
Brogan et al., “Dynamically Simulated Characters in Virtual Environments”, Sep./Oct. 1998, pp. 2-13, vol. 18, Issue 5, IEEE Computer Graphics and Applications. |
Fisher et al., “Virtual Environment Display System”, ACM Workshop on Interactive 3D Graphics, Oct. 1986, Chapel Hill, NC. |
“Virtual High Anxiety”, Tech Update, Aug. 1995, p. 22. |
Sheridan et al., “Virtual Reality Check”, Technology Review, Oct. 1993, pp. 22-28, vol. 96, No. 7. |
Stevens, “Flights into Virtual Reality Treating Real World Disorders”, The Washington Post, Mar. 27, 1995, Science Psychology, 2 pages. |
“Simulation and Training”, 1994, Division Incorporated. |
Gedikli et al., “An Adaptive Vision System for Tracking Soccer Players from Variable Camera Settings,” Intelligent Autonomous Systems Group, 10 pages, downloaded at http://www9.cs.tum.edu/papers/pdf/gedikli07adaptive.pdf on Feb. 10, 2009. |
“Slots and Video Gaming,” International Game Technology, 2005, pp. 1-86, downloaded at http://media.igt.com/Marketing/PromotionalLiterature/IntroductionToGaming.pdf. |
“Larry Kless's Weblog,” Dec. 2008, pp. 1-99, downloaded at http://klessblog.blogspot.com/2008_12_01_archive.html. |
Kane, B., “Postcard from Siggraph 2005: Beyond the Gamepad,” Think Services, Aug. 19, 2005, pp. 1-5, downloaded at http://www.gamasutra.com/view/feature/2378/postcard_from_siggraph_2005.php. |
Feng, W., “What's Next for Networked Games?” NetGames 2007 keynote talk, Sep. 19-20, 2007, 74 pages, downloaded at http://www.thefengs.com/wuchang/work/cstrike/netgames07_keynote.pdf. |
Shivappa et al., “Person Tracking with Audio-Visual Cues Using Iterative Decoding Framework”, IEEE Fifth International Conference on Advanced Video and Signal Based Surveillance, AVSS '08, Santa Fe, NM, Sep. 1-3, 2008, pp. 260-267. |
Morris et al., “Beyond Social Protocols: Multi-User Coordination Policies for Co-located Groupware”, Mitsubishi Electric Research Laboratories, Jan. 2004. |
PCT Application No. PCT/US2010/032975: International Search Report and Written Opinion of the International Searching Authority, Jan. 7, 2011, 8 pages. |
Ahn et al., “Large Display Interaction using Video Avatar and Hand Gesture Recognition”, Imaging Media Research Center, KIST, Sep. 2004, 8 pages. |
Golomidov, “Human Detection in Video”, Apr. 18, 2008, 13 pages. |
Karaulova et al., “A Hierarchical Model of Dynamics for Tracking People with a Single Video Camera”, Sep. 2000, 10 pages. |
Kartz et al., “On Hand Tracking & Gesture Recognition”, Wiizards: 3D Gesture Recognition for Game Play, www.krd-haptics.blogspot.com/2008/03/wiizards-3d-gesture-recognition-for.html, Mar. 19, 2008, pp. 1-2. |
Manninen, “Interaction Manifestations in Multi-player Games”, Being There: Concepts, effects and measurement of user presence in synthetic environments, Chapter 20, Amsterdam, The Netherlands, Apr. 2003, 10 pages. |
Morris et al., “Cooperative Gestures: Multi-User Gestural Interactions for Co-located Groupware”, CHI 2006, Apr. 22-28, 2006, 10 pages. |
Pers et al., “Computer Vision System for Tracking Players in Sports Games”, First Int'l Workshop on Image and Signal Processing and Analysis, Pula, Croatia, Jun. 14-15, 2000, 6 pages. |
Office Action dated Jul. 2, 2013, in Chinese Patent Appl. No. 201110150690.5 filed May 26, 2011. |
Amendment dated Dec. 29, 2014, in U.S. Appl. No. 12/788,731, filed May 27, 2010. |
Office Action dated Aug. 12, 2014, in Chinese Patent Appl. No. 201110150690.5 filed May 26, 2011. |
Amendment dated Oct. 27, 2014, in Chinese Patent Appl. No. 201110150690.5 filed May 26, 2011. |
Amendment dated Sep. 2, 2014, in U.S. Appl. No. 12/788,731, filed May 27, 2010. |
Office Action dated Sep. 29, 2014, in U.S. Appl. No. 12/788,731, filed May 27, 2010. |
Response to Office Action, and partial English translation, filed May 15, 2015 in Chinese Patent Application No. 201180030882.0. |
Response to Office Action filed Jun. 2, 2015 in U.S. Appl. No. 12/788,731. |
“First Office Action and Search Report Received for Chinese Patent Application No. 201180030882.0”, and partial translation, Mailed Date: Jan. 14, 2015, 17 pages. (MS# 329360.05). |
“Office Action Received for Japanese Patent Application No. 2013-516608”, and partial English translation, Mailed Date: May 20, 2015, 11 pages. (MS# 329360.07). |
Office Action dated Jan. 14, 2015, in Chinese Patent Appl. No. 201180030882.0 filed Jun. 14, 2011. |
Office Action dated Mar. 4, 2015, in U.S. Appl. No. 12/788,731, filed May 27, 2010. |
Voluntary Submission of Information dated Mar. 30, 2015, in Canadian Patent Appl. No. 2,800,538 filed Jun. 14, 2011. |
Office Action dated Feb. 15, 2015, in Chinese Patent Appl. No. 201110150690.5 filed May 26, 2011. |
Response to Office Action, with partial English translation, filed May 4, 2015 in Chinese Patent Appl. No. 201110150690.5 filed May 26, 2011. |
Fifth Office Action dated Jul. 31, 2015 in Chinese Patent Application No. 201110150690.5, with partial English Translation. |
Response to Office Action filed Aug. 5, 2015 in Japanese Patent Application No. 2013-516608, and partial English translation. |
“Office Action and Search Report Received for Taiwan Patent Application No. 100117828”, Mailed Date: Sep. 7, 2015, 11 Pages. (MS# 329360.03). |
“Second Office Action Received for Chinese Patent Application No. 201180030882.0”, Mailed Date: Sep. 11, 2015, 12 Pages. (MS# 329360.05). |
“Office Action Issued in Japanese Patent Application No. 2013-516608”, Mailed Date: Nov. 16, 2015, 8 Pages. (MS# 329360.07). |
“Third Office Action Issued in Chinese Patent Application No. 201180030882.0”, Mailed Date: Feb. 1, 2016, 7 Pages. (MS# 329360.05). |
Office Action dated Sep. 25, 2015 in U.S. Appl. No. 12/788,731. |
Response to Office Action filed Oct. 28, 2015 in Chinese Patent Application No. 201180030882.0, and partial English translation. |
Response to Office Action filed Oct. 14, 2015 in Chinese Patent Application No. 201110150690.5, and partial English translation. |
“Office Action and Search Report Issued in Taiwan Patent Application No. 100117828”, Mailed Date: Feb. 23, 2016, 15 Pages. |
“Office Action Issued in European Patent Application No. 11798639.8”, Mailed Date: Mar. 18, 2016, 6 Pages. |
“Supplementary Search Report Issued in European Patent Application No. 11798639.8”, Mailed Date: Feb. 24, 2016, 3 Pages. |
Response to Office Action filed Jan. 13, 2016, with English translation of claims as amended, in Taiwan Patent Application No. 100117828, 19 pages. |
Response to Office Action filed Feb. 1, 2016, with English translation of claims as amended, in Japanese Patent Application No. 2013-516608, 12 pages. |
Office Action, with partial English translation, mailed Feb. 19, 2016 in Chinese Patent Application No. 201110150690.5, 6 pages. |
Response to Office Action, with partial English translation, filed Mar. 22, 2016 in Chinese Patent Application No. 201180030882.0, 80 pages. |
Notice of Allowance dated Apr. 25, 2016 in Japanese Patent Application No. 2013-516608, 4 pages. |
Notice of Allowance dated Apr. 13, 2016 in Chinese Patent Application No. 201180030882.0, 10 pages. |
“Office Action Issued in Taiwan Patent Application No. 100117828”, with English language Summary of Office Action, Mailed Date: Nov. 8, 2016, 4 Pages. |
“Office Action Issued in Canadian Patent Application No. 2,800,538”, Mailed Date: Jan. 12, 2017, 6 Pages. |
Response to Office Communication filed Apr. 22, 2016 in European Patent Application No. 11 798 639.8, 14 pages. |
Summons to attend oral proceedings in European Patent Application No. 11798639.8, dated Nov. 28, 2016, 6 pages. |
Response to Summons to attend oral proceedings in European Patent Application No. 11798639.8, filed Aug. 23, 2017, 29 pages. |
“Office Action Issued in Canadian Patent Application No. 2800538”, dated Sep. 6, 2017, 9 Pages. |
Response to Summons to attend oral proceedings in European Patent Application No. 11798639.8, filed Sep. 28, 2017, 19 pages. |
Number | Date | Country | |
---|---|---|---|
20100277411 A1 | Nov 2010 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 12434553 | May 2009 | US |
Child | 12820954 | US |